Now accepting early access signups

AI Lecture Notes in Real Time

Stop taking notes.
Start actually learning.

AI notes and study tools that appear while your professor is still talking.

PopScribe AI Introduction to Neural Networks
Live Transcript
10:22 ...the loss function tells us how wrong
10:22 our prediction is. We want to minimize
10:23 that. So the gradient gives us direction,
10:23 and backpropagation computes it layer
10:24 by layer using the chain rule. Google
10:24 released a framework for this called
10:24 TensorFlow that changed everything.
10:25 The weight update formula is
AI Notes
Backpropagation & Gradient Descent
Overview
Loss function: measures prediction error
Gradient: direction of steepest ascent (we step the opposite way to descend)
Chain rule: applied layer-by-layer (backward)
Weight Update Rule
w_new = w_old − η · ∂L/∂w
η (eta) = learning rate, controls step size
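The update rule above can be sketched in a few lines of plain Python. The toy loss, learning rate, and starting weight below are illustrative choices, not part of the notes:

```python
# Minimal sketch of the weight update rule w_new = w_old - eta * dL/dw,
# on a toy loss L(w) = (w - 3)^2, whose gradient is dL/dw = 2 * (w - 3).

def grad(w):
    return 2 * (w - 3)  # derivative of (w - 3)^2

w = 0.0      # initial weight (illustrative)
eta = 0.1    # learning rate: controls step size
for _ in range(50):
    w = w - eta * grad(w)  # step against the gradient

print(w)  # converges toward the minimum at w = 3
```

Each step moves the weight a fraction (set by η) of the way against the gradient, which is why a too-large η overshoots and a too-small one crawls.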
Neural Network Architecture
Data Flow
Input → Hidden ×3 → Output → Loss ← backpropagation
Forward pass: input → hidden → output
Backward pass: loss → gradients → weight updates
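The forward and backward passes can be traced by hand on a tiny two-weight network (one linear hidden unit, one linear output). All values below are illustrative, assuming squared-error loss:

```python
# Sketch of one forward and one backward pass, showing the chain rule
# applied layer by layer, working backward from the loss.

x, t = 1.0, 2.0          # input and target (illustrative)
w1, w2 = 0.5, 0.5        # weights (illustrative)

# Forward pass: input -> hidden -> output -> loss
h = w1 * x               # hidden activation
y = w2 * h               # prediction
L = (y - t) ** 2         # squared-error loss

# Backward pass: loss -> gradients, from the output layer inward
dL_dy = 2 * (y - t)      # how the loss changes with the prediction
dL_dw2 = dL_dy * h       # chain rule through y = w2 * h
dL_dh = dL_dy * w2       # gradient flowing back into the hidden layer
dL_dw1 = dL_dh * x       # chain rule through h = w1 * x

print(L, dL_dw1, dL_dw2)
```

Gradient descent then plugs dL_dw1 and dL_dw2 into the weight update rule; backprop supplies the gradients, gradient descent takes the step.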
Instant Insights
Key Term
Chain Rule: Derivative of composite functions. In backprop, it computes how each layer contributes to the total error.
Google DeepMind
AI Research Lab · Founded 2010 · London, UK
In the News
AlphaFold 3 predicts all molecular interactions · 2 days ago
Gemini 2.0 tops coding benchmarks · 1 week ago
Connection
Relates to gradient descent from Week 2. Backprop computes the gradients that gradient descent uses to update weights.
Chat with Scribe
You
Can you explain backpropagation like I'm 5?
Scribe
Imagine you're throwing darts blindfolded. After each throw, a friend tells you "too far left" or "too high." You adjust a little each time. That's backpropagation: the network makes a guess, checks how wrong it was, then adjusts. Each layer of the network gets its own correction, working backward from the final answer.
You
What's the difference between that and gradient descent?
Scribe
Great question! They're partners: backprop figures out which direction to adjust (it computes the gradients), and gradient descent decides how big a step to take (it uses those gradients to update the weights). The η in your notes controls the step size.
Curious about something? Ask me...
ClozeBloom: Apply
In backpropagation, the ______ is applied layer-by-layer to compute how each weight contributes to the total error.
A rule for differentiating composite functions
Again
Hard
Good
Easy
3 streak · 7 / 12 cards · Section: Backpropagation

13+ years in tech. Still can't take notes fast enough to keep up in class.

How PopScribe Works

1

Start your session

Open the app on your laptop before class. One click.

2

Go to class

Pay attention to your professor. PopScribe takes care of the rest.

3

Study smarter

Notes, flashcards, and insights — ready before you leave the room.

Real-Time AI Lecture Notes and Study Tools

Real-Time Lecture Notes

Structured notes that write themselves as the lecture happens. You just... learn.

Instant Insights

Key concepts explained as they come up — like sitting next to someone who already read the textbook.

Chat with Your Notes

Curious about a concept? Ask your notes. They were paying attention too.

AI Flashcards

Flashcards that make themselves. Built from what you just learned. Export to Anki or study in the app.

Your Data, Your Control

Your notes and transcripts are stored only on your device. Audio is streamed for transcription and never stored. Images and text are processed by AI services but never retained by us. No one trains on your data. You own everything.

Built by a Student, for Students

I'm Brian. I've spent 13+ years in tech at companies like GitLab, Fastly, and Intercom. Now I'm studying AI at Laney College in Oakland, and I still can't take notes fast enough to keep up with what the professor is saying. So I started building something to fix it.
BF
Brian Flood
Founder, PopScribe AI

Join the Waitlist

Be one of the first to try PopScribe.

No spam. Just early access updates.