Old changes #88
Conversation
Summary of Changes

Hello @levtechs, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request delivers a major upgrade to the platform, focusing on expanding its educational capabilities and administrative features. It introduces a course management system with structured lessons, integrated quizzes, and detailed analytics. The core AI chat experience has been enhanced to provide streaming responses, personalized interactions through new preference settings, and more intelligent generation of study cards and content hierarchies. Additionally, new administrative tools offer better oversight of user and project activity, and various UI components have been refined to improve usability and engagement.
Code Review
This is a massive pull request that introduces a significant number of new features and major architectural changes across the entire application. Key additions include a comprehensive admin panel, a course creation and management system, a quiz generation and grading feature, a commenting system, and a move to a streaming chat API. The data models have been substantially updated to support these new features, and the UI has been enhanced with theming and many new components. While the scope of these changes is impressive, it would be highly beneficial to break such large updates into smaller, more focused pull requests in the future to facilitate easier review and testing. I've identified a few critical and high-severity issues related to performance, scalability, and API design that should be addressed, along with some medium-severity suggestions for improvement.
```typescript
// Tally user action
const userDocRef = doc(db, "users", uid);
const userDoc = await getDoc(userDocRef);
const userData = userDoc.exists() ? userDoc.data() : {};
const now = Date.now();

// Handle daily actions
let dailyActions = userData.dailyActions || 0;
let lastResetDaily = userData.lastResetDaily || 0;
if (now - lastResetDaily > 24 * 60 * 60 * 1000) {
    dailyActions = 0;
    lastResetDaily = now;
}
dailyActions += 1;

// Handle weekly actions
let weeklyActions = userData.weeklyActions || 0;
let lastResetWeekly = userData.lastResetWeekly || 0;
if (now - lastResetWeekly > 7 * 24 * 60 * 60 * 1000) {
    weeklyActions = 0;
    lastResetWeekly = now;
}
weeklyActions += 1;

const totalActions = (userData.actions || 0) + 1;

await setDoc(userDocRef, {
    actions: totalActions,
    dailyActions,
    lastResetDaily,
    weeklyActions,
    lastResetWeekly
}, { merge: true });

return uid;
```
This implementation for tracking user actions will cause significant performance and cost issues. The getVerifiedUid function is called for every authenticated API request. Placing the action tracking logic here means that every single API call will trigger a read and a write operation on the user's document in Firestore. As user activity increases, this will rapidly consume your Firestore quotas and lead to unnecessary costs and potential performance bottlenecks.
This logic should be extracted into a separate, explicit function (e.g., trackUserAction(uid)) and called only from the specific API routes that represent a meaningful, trackable user action, rather than being a side effect of every authentication check.
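The suggested extraction might look something like the sketch below: the rolling-window tally is a pure helper, and a hypothetical `trackUserAction(uid)` wrapper (not shown) would call it and perform the single Firestore read/write. The names `rollWindow` and `ActionWindow` are illustrative, not from the PR.

```typescript
// Pure core of the suggested trackUserAction(uid) helper. Keeping the
// window-reset arithmetic out of the Firestore call makes it unit-testable.
interface ActionWindow {
    count: number;
    lastReset: number; // epoch millis of the last window reset
}

const DAY_MS = 24 * 60 * 60 * 1000;

// Reset the counter if the window has elapsed, then record one action.
function rollWindow(win: ActionWindow, now: number, windowMs: number): ActionWindow {
    if (now - win.lastReset > windowMs) {
        return { count: 1, lastReset: now };
    }
    return { count: win.count + 1, lastReset: win.lastReset };
}
```

Only routes that represent a meaningful, trackable action would call the wrapper, so a plain authentication check no longer costs a document read plus a write.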
```typescript
export async function getUsers(limitNum: number, lastId?: string): Promise<User[]> {
    try {
        // Fetch all users since orderBy excludes documents without the field
        const q = query(collection(db, "users"));
        const snapshot = await getDocs(q);
        const allUsers = snapshot.docs.map(doc => {
            const data = doc.data();
            return {
                id: doc.id,
                email: data.email,
                displayName: data.displayName,
                actions: data.actions,
                dailyActions: data.dailyActions,
                weeklyActions: data.weeklyActions ?? 0,
                signUpResponses: data.signUpResponses,
                starUser: data.starUser,
            } as User;
        });
        // Sort by weeklyActions desc
        allUsers.sort((a, b) => (b.weeklyActions ?? 0) - (a.weeklyActions ?? 0));
        // Paginate
        if (!lastId) {
            return allUsers.slice(0, limitNum);
        } else {
            const lastIndex = allUsers.findIndex(user => user.id === lastId);
            if (lastIndex === -1) return [];
            return allUsers.slice(lastIndex + 1, lastIndex + 1 + limitNum);
        }
    } catch (err) {
        console.error("Error fetching users:", err);
        throw err;
    }
}
```
This getUsers function is not scalable. It fetches all user documents from the database into memory, sorts them, and then performs pagination on the in-memory array. This will become very slow and expensive as the number of users grows.
The comment on line 33 (// Fetch all users since orderBy excludes documents without the field) indicates the reason for this approach, but it's addressing the problem in the wrong place. Instead of fetching all documents, you should ensure that the weeklyActions field always exists on user documents (e.g., by setting a default value of 0 when a user is created). This would allow you to use a direct Firestore query with orderBy('weeklyActions', 'desc') and startAfter() for efficient, server-side pagination.
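With `weeklyActions` guaranteed to exist, the query could be pushed server-side. A rough sketch using the modular Firestore SDK's cursor pagination (assuming the same `db` handle and `User` mapping as above; existing documents missing the field would need a one-time backfill, since `orderBy` excludes them):

```typescript
import { collection, doc, getDoc, getDocs, limit, orderBy, query, startAfter } from "firebase/firestore";

export async function getUsersPage(limitNum: number, lastId?: string): Promise<User[]> {
    let q = query(collection(db, "users"), orderBy("weeklyActions", "desc"), limit(limitNum));
    if (lastId) {
        // Cursor pagination: resume after the last document of the previous page
        const lastSnap = await getDoc(doc(db, "users", lastId));
        if (lastSnap.exists()) {
            q = query(
                collection(db, "users"),
                orderBy("weeklyActions", "desc"),
                startAfter(lastSnap),
                limit(limitNum)
            );
        }
    }
    const snapshot = await getDocs(q);
    return snapshot.docs.map(d => ({ id: d.id, ...d.data() } as User));
}
```

This reads at most `limitNum` documents per page instead of the whole collection.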
```typescript
export async function getAdminStats(): Promise<{ totalUsers: number; totalProjects: number; totalActions: number; usersWithSignUp: number }> {
    try {
        // Fetch all users
        const usersSnap = await getDocs(collection(db, "users"));
        const users = usersSnap.docs.map(doc => ({ id: doc.id, ...doc.data() } as User));
        const totalUsers = users.length;
        const totalActions = users.reduce((sum, user) => sum + (user.actions || 0), 0);
        const usersWithSignUp = users.filter(user => user.signUpResponses).length;

        // Fetch all projects
        const projectsSnap = await getDocs(collection(db, "projects"));
        const totalProjects = projectsSnap.size;

        return { totalUsers, totalProjects, totalActions, usersWithSignUp };
    } catch (err) {
        console.error("Error fetching admin stats:", err);
        throw err;
    }
}
```
This function fetches all user and project documents to calculate aggregate statistics. This is inefficient and will not scale. As the number of documents increases, this operation will become slow and costly.
Consider using Firestore's built-in aggregation queries, such as count(), to get the total number of users and projects directly from the server without fetching all the documents. For totalActions, it would be more scalable to implement a distributed counter that is updated atomically, rather than summing the actions from all user documents on every request.
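A sketch of the aggregation approach with the modular SDK (the `sum()` aggregate needs a recent `firebase` release; a distributed counter remains the more scalable option for `totalActions` if write volume is high):

```typescript
import { collection, getAggregateFromServer, getCountFromServer, sum } from "firebase/firestore";

export async function getAdminStatsAggregated() {
    // count()/sum() run on the server; only the aggregate results are returned
    const [usersCount, projectsCount, actionsSum] = await Promise.all([
        getCountFromServer(collection(db, "users")),
        getCountFromServer(collection(db, "projects")),
        getAggregateFromServer(collection(db, "users"), { totalActions: sum("actions") }),
    ]);
    return {
        totalUsers: usersCount.data().count,
        totalProjects: projectsCount.data().count,
        totalActions: actionsSum.data().totalActions,
    };
}
```

`usersWithSignUp` could similarly be a `getCountFromServer` over a filtered query, provided the field is always present on documents that should match.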
```typescript
export const groundingChunksToCardsAndWrite = async (
    projectId: string,
    oldCards: Card[],
    groundingChunks: GroundingChunk[],
): Promise<Card[]> => {
    // Priority function
    const getPriority = (uri: string): number => {
        const domain = new URL(uri).hostname.toLowerCase();
        if (domain.includes('youtube.com')) return 10;
        if (domain.includes('wikipedia.org')) return 9;
        if (domain.endsWith('.org') || domain.endsWith('.edu') || domain.endsWith('.gov')) return 5;
        return 1;
    };

    // Sort chunks by priority desc
    const sortedChunks = groundingChunks.sort((a, b) => getPriority(b.web.uri) - getPriority(a.web.uri));

    const newCardsRaw: NewCard[] = [];

    // We will limit the number of websites scraped with this cost.
    // Some may be cheaper to scrape or are more usefull so they will contribute to cost less
    let cost: number = 0;

    // Process other chunks
    for (const chunk of sortedChunks) {
        if (cost > 5) break;
        const uri = chunk.web.uri;

        // Check if a card with this uri already exists
        const existingCard = oldCards.find(card => card.url === uri);
        if (existingCard) {
            continue; // Skip if already exists
        }

        try {
            // Fetch the webpage to extract images with timeout
            const controller = new AbortController();
            const timeoutId = setTimeout(() => controller.abort(), 10000); // 10s timeout
            const response = await fetch(uri, {
                redirect: "follow",
                signal: controller.signal
            });
            clearTimeout(timeoutId);
            const html = await response.text();

            cost += 1;

            // Extract title
            const titleMatch = html.match(/<title[^>]*>([\s\S]*?)<\/title>/i);
            const scrapedTitle = titleMatch ? titleMatch[1].trim() : chunk.web.title;

            if (!response.ok) {
                console.warn(`Failed to fetch ${uri}: ${response.status}`);
                // Still create card without images
                newCardsRaw.push({
                    title: `resource: "${scrapedTitle}"`,
                    url: response.url,
                    exclude: false,
                });
                continue;
            }

            // Youtube chunks handled differently
            if (response.url.includes("youtube.com") || response.url.includes("youtu.be")) {
                try {
                    const data = await getYoutubeData(response.url);
                    const desc = data.description.length > 80 ? data.description.slice(0, 77) + "..." : data.description;
                    newCardsRaw.push({
                        title: `resource: "${data.title}"`,
                        url: response.url,
                        details: [
                            desc,
                            `Channel: ${data.channelTitle}`,
                            `Duration: ${data.duration || 'N/A'}`,
                            `Published: ${new Date(data.publishedAt).toLocaleDateString('en-US', { year: 'numeric', month: 'long', day: 'numeric' })}`
                        ],
                        refImageUrls: [data.thumbnailUrl],
                        exclude: false,
                    });
                } catch (err) {
                    console.error(`Error processing YouTube ${response.url}:`, err);
                    // Fallback
                    newCardsRaw.push({
                        title: `resource: "${chunk.web.title}"`,
                        url: chunk.web.uri,
                        exclude: false,
                    });
                } finally {
                    cost -= 1; // Youtube videos shouldn't contribute to cost
                    continue;
                }
            }

            // Extract favicon
            let iconUrl: string | undefined;
            const iconMatch = html.match(/<link[^>]*rel=["'](?:icon|shortcut icon)["'][^>]*href=["']([^"']*)["'][^>]*>/i);
            if (iconMatch) {
                try {
                    iconUrl = new URL(iconMatch[1], response.url).href;
                } catch {
                    iconUrl = new URL('/favicon.ico', response.url).href;
                }
            } else {
                iconUrl = new URL('/favicon.ico', response.url).href;
            }

            // Extract first heading
            const headingMatch = html.match(/<h[1-2][^>]*>([\s\S]*?)<\/h[1-2]>/i);
            let firstHeading = '';
            if (headingMatch) {
                // Strip HTML tags
                firstHeading = headingMatch[1].replace(/<[^>]*>/g, '').trim();
                if (firstHeading.length > 128) {
                    firstHeading = firstHeading.slice(0, 125) + '...';
                }
            }

            // Extract image URLs (limit to first 10)
            const imgMatches = html.match(/<img[^>]*src=["']([^"']*)["'][^>]*>/gi) || [];
            const imageCandidates: string[] = [];
            for (const imgTag of imgMatches.slice(0, 10)) { // Limit to first 10
                const srcMatch = imgTag.match(/src=["']([^"']*)["']/);
                if (srcMatch) {
                    try {
                        const imgUrl = new URL(srcMatch[1], response.url).href;
                        if (!imgUrl.toLowerCase().includes('.svg') && !imgUrl.toLowerCase().includes('.ico')) {
                            imageCandidates.push(imgUrl);
                        }
                    } catch {
                        // Skip invalid URLs
                    }
                }
            }

            // Fetch image details with timeout and concurrency limit
            const imageDetails = await Promise.allSettled(
                imageCandidates.map(async (url) => {
                    try {
                        const controller = new AbortController();
                        const timeoutId = setTimeout(() => controller.abort(), 5000); // 5s timeout
                        const headRes = await fetch(url, { method: 'HEAD', signal: controller.signal });
                        clearTimeout(timeoutId);
                        const contentType = headRes.headers.get('content-type') || '';
                        const contentLength = parseInt(headRes.headers.get('content-length') || '0');
                        if (contentType.startsWith('image/') && contentLength > 1000) { // Filter small or non-images
                            return { url, size: contentLength };
                        }
                    } catch {
                        // Ignore errors
                    }
                    return null;
                })
            ).then(results => results.map(r => r.status === 'fulfilled' ? r.value : null));

            const validImages = imageDetails.filter(Boolean).sort((a, b) => (b?.size || 0) - (a?.size || 0));
            const refImageUrls = validImages.slice(0, 5).map(img => img!.url);

            newCardsRaw.push({
                title: `resource: "${scrapedTitle}"`,
                url: response.url,
                iconUrl,
                exclude: false,
                ...(firstHeading && { details: [firstHeading] }),
                ...(refImageUrls.length > 0 && { refImageUrls }),
            });
        } catch (err) {
            console.error(`Error processing ${uri}:`, err);
            // Create card without images on error
            newCardsRaw.push({
                title: `resource: "${chunk.web.title}"`,
                url: uri,
                exclude: false,
            });
        }
    }

    // Write new cards to DB
    const newCards = await writeCardsToDb(projectId, newCardsRaw);

    // Return newCards
    return newCards;
};
```
Performing web scraping within a synchronous API route like this is risky and can lead to very long response times. The fetch calls to external websites can be slow, unreliable, and may be blocked by CORS or bot detection mechanisms. The multiple fetch calls for images further add to the latency.
This entire scraping and data extraction logic should be moved to an asynchronous background task, such as a serverless function (e.g., Google Cloud Function) triggered by an event. The API route could initiate the task and immediately return a response to the client, which could then poll for the results or receive them via a websocket or another notification mechanism.
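One possible shape for this, sketched with hypothetical names (a `scrapeJobs` collection whose document creation triggers a background worker such as a Cloud Function; none of these identifiers are from the PR):

```typescript
import { NextResponse } from "next/server";
import { addDoc, collection } from "firebase/firestore";

export async function POST(req: Request) {
    const { projectId, groundingChunks } = await req.json();

    // Record a pending job and return immediately. A background worker,
    // triggered on this document's creation, would run the slow scraping
    // (groundingChunksToCardsAndWrite) and write the resulting cards,
    // which the client then polls for or receives via a listener.
    const jobRef = await addDoc(collection(db, "scrapeJobs"), {
        projectId,
        groundingChunks,
        status: "pending",
        createdAt: Date.now(),
    });

    return NextResponse.json({ jobId: jobRef.id }, { status: 202 });
}
```

Returning 202 keeps the route's latency independent of how slow or flaky the scraped sites are.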
```typescript
export const writeCardsToDb = async (
    projectId: string,
    newCards: NewCard[]
): Promise<Card[]> => {
    try {
        const cardsCollectionRef = collection(db, "projects", projectId, "cards");

        const addedCards: Card[] = [];

        for (const card of newCards) {
            const docRef = await addDoc(cardsCollectionRef, card);
            addedCards.push({
                id: docRef.id,
                ...card,
            });
        }

        return addedCards;
    } catch (err) {
        console.error("Error writing cards to Firestore:", err);
        throw err;
    }
};
```
Using await inside a for...of loop for database writes can be inefficient for bulk operations, as it performs the writes sequentially. For creating multiple cards at once, it's more efficient to use a writeBatch to group all the addDoc operations into a single atomic transaction. This reduces the number of round trips to the database and can be more cost-effective.
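A batched version might look like the sketch below (same `db` and types as above; note a single Firestore batch is capped at 500 operations, so very large card sets would need chunking):

```typescript
import { collection, doc, writeBatch } from "firebase/firestore";

export const writeCardsToDbBatched = async (
    projectId: string,
    newCards: NewCard[]
): Promise<Card[]> => {
    const cardsCollectionRef = collection(db, "projects", projectId, "cards");
    const batch = writeBatch(db);

    const addedCards: Card[] = newCards.map(card => {
        const docRef = doc(cardsCollectionRef); // Pre-allocate a document ID client-side
        batch.set(docRef, card);
        return { id: docRef.id, ...card };
    });

    await batch.commit(); // One atomic round trip instead of N sequential writes
    return addedCards;
};
```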
```typescript
// Check permissions: creator or owner/admin
let hasPermission = createdBy === uid;

if (!hasPermission) {
    if (projectId) {
        const projectRef = doc(db, "projects", projectId);
        const projectSnap = await getDoc(projectRef);
        if (projectSnap.exists()) {
            const projectData = projectSnap.data();
            const ownerId = projectData?.ownerId;
            const userRef = doc(db, "users", uid);
            const userSnap = await getDoc(userRef);
            const userEmail = userSnap.exists() ? userSnap.data()?.email : null;

            if (ownerId === uid || (userEmail && projectData?.collaborators?.includes(userEmail))) {
                hasPermission = true;
            }
        }
    } else if (courseId) {
        const courseRef = doc(db, "courses", courseId);
        const courseSnap = await getDoc(courseRef);
        if (courseSnap.exists()) {
            const courseData = courseSnap.data();
            const ownerId = courseData?.ownerId;
            const userRef = doc(db, "users", uid);
            const userSnap = await getDoc(userRef);
            const userEmail = userSnap.exists() ? userSnap.data()?.email : null;

            if (ownerId === uid || (userEmail && courseData?.admins?.includes(userEmail))) {
                hasPermission = true;
            }
        }
    }
}

if (!hasPermission) {
    return NextResponse.json({ error: "Access denied" }, { status: 403 });
}
```
The permission checking logic here in the DELETE handler is very similar to the logic in the POST handler. This duplication could lead to inconsistencies if the access control rules change in the future. Consider refactoring this logic into a shared helper function, for example hasInvitePermission(itemId, itemType, uid), to improve maintainability and reduce code duplication.
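The shared rule reduces to a pure predicate over data both handlers already load; a sketch (the name `canManageInvite` is illustrative, and the Firestore reads would live in the suggested `hasInvitePermission` wrapper):

```typescript
// Pure core of the suggested shared permission helper. The same rule covers
// both projects (collaborators list) and courses (admins list) once the
// member emails have been loaded from Firestore.
function canManageInvite(
    uid: string,
    createdBy: string,
    ownerId: string | undefined,
    userEmail: string | null,
    memberEmails: string[] | undefined,
): boolean {
    if (createdBy === uid) return true; // invite creator
    if (ownerId === uid) return true;   // project/course owner
    return !!userEmail && !!memberEmails?.includes(userEmail); // collaborator/admin
}
```

Both POST and DELETE could then call the same wrapper, so a future change to the access rules lands in one place.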
|
/gemini review
Code Review
This pull request introduces several significant features and refactors, including adding admin roles for courses, course analytics, and the ability to delete invitations. It also refactors the Gemini API calls to use a newer SDK version and removes the isUnlocked flag from cards, which is a good improvement for data modeling. I've found a few issues, including a duplicated block of code that adds context to the chat prompt twice, a potential performance bottleneck in the new course analytics endpoint due to N+1 database queries, and some minor bugs in the course creation and progress calculation logic. Overall, the changes are substantial and move the project forward. Please address the feedback to ensure the new features are robust and performant.
```typescript
if (attachments) {
    contents.push({
        role: "user",
        parts: [{ text: `CHAT ATTACHMENTS: ${JSON.stringify(attachments)}` }]
    })
}

if (courseCards && courseCards.length > 0) {
    contents.push({
        role: "user",
        parts: [{ text: `COURSE CARDS: ${JSON.stringify(courseCards)}` }]
    })
}
```
```typescript
await Promise.all(lessons.map(async (lesson) => {
    const projs = lessonProjects[lesson.id] || [];
    if (projs.length === 0) {
        progress[lesson.id] = 0;
    } else {
        const progresses: number[] = [];
        for (const proj of projs) {
            try {
                const cards = await fetchCardsFromProject(proj.id);
                const lessonCardIds = new Set(lesson.cardsToUnlock.map(c => c.id));
                const unlockedCount = cards.filter(card => lessonCardIds.has(card.id)).length;
                const totalCards = lesson.cardsToUnlock.length;
                if (totalCards === 0) {
                    progresses.push(100);
                } else {
                    progresses.push(Math.round((unlockedCount / totalCards) * 100));
                }
            } catch (error) {
                progresses.push(0);
            }
        }
        progress[lesson.id] = Math.max(...progresses);
    }
}));
```
This section for calculating user progress has a potential performance issue. It iterates through users, then lessons, and then projects for each lesson, making a database call fetchCardsFromProject(proj.id) inside the innermost loop. This can lead to a large number of database calls (N+1 problem), which will not scale well with more users, lessons, or projects. Consider refactoring to fetch all necessary cards in a more optimized way, for example, by collecting all project IDs and fetching their cards in a batch if possible, to reduce the number of round trips to the database.
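One way to flatten the N+1 pattern: collect the project IDs referenced across all users and lessons up front, fetch each project's cards exactly once in parallel, and share the results. A sketch with an injected fetcher (`fetchCardsFromProject` is the existing function; `fetchCardsOnce` is a hypothetical helper):

```typescript
// Fetch each project's cards at most once, in parallel, and reuse the results
// across every (user, lesson) pair that references that project.
async function fetchCardsOnce<T>(
    projectIds: string[],
    fetchCards: (id: string) => Promise<T[]>,
): Promise<Map<string, T[]>> {
    const unique = [...new Set(projectIds)];
    const results = await Promise.all(unique.map(id => fetchCards(id)));
    return new Map(unique.map((id, i) => [id, results[i]]));
}
```

The progress loop then becomes a pure in-memory computation over the prefetched map, turning O(users × lessons × projects) database calls into one call per distinct project.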
```diff
     ...limitedGeneralConfig,
     responseMimeType: "application/json",
-    responseSchema: fullLessonSchema,
+    responseSchema: courseStructureSchema,
```
The createLessonFromText function is using courseStructureSchema for its response schema. However, this function is expected to generate a single lesson, so it should be using fullLessonSchema instead to ensure the response from the language model has the correct structure for a lesson.
Suggested change:

```diff
-    responseSchema: courseStructureSchema,
+    responseSchema: fullLessonSchema,
```
```diff
 try {
     const cards = await getCards(project.id);
-    const unlockedCount = cards.filter((card) => card.isUnlocked).length;
+    const unlockedCount = cards.length;
```
The logic for calculating unlockedCount appears to be incorrect after the refactor. It's currently counting all cards in the project (cards.length), which will likely result in an inaccurate progress percentage. The progress should be based on the number of lesson-specific cards that have been unlocked (i.e., exist in the project). You should filter the project's cards to count only those that match the IDs in lesson.cardsToUnlock.
Suggested change:

```diff
-    const unlockedCount = cards.length;
+    const unlockedCount = cards.filter(c => lesson.cardsToUnlock.some(lc => lc.id === c.id)).length;
```
```typescript
let newCardsRaw: NewCard[] = [];
let unlockedCardIds: string[] = [];

console.log("Raw card gen response:", jsonString);
```
```typescript
} else if (error.status === 429) {
    throw err;
} else {
    throw err;
}
```
No description provided.