Organizers
Zac Cross {zacc}†
Mathew Huerta-Enochian {mhuertae}†
Joseph McLaughlin {jmclaug2}†
† @uoregon.edu
Upcoming
2026.04.02 (Week 1)
2026.04.09 (Week 2)
2026.04.16 (Week 3)
- Paper →
- Presenter → Simon Casey
2026.04.23 (Week 4)
- Paper →
- Presenter → Manish Mathai
2026.04.30 (Week 5)
- Paper →
- Presenter → Mathew Huerta-Enochian
2026.05.07 (Week 6)
- Paper →
- Presenter → Zac Cross
Past meetings
2026.03.12 (Week 10, W26) KAN-ODEs
2026.03.05 (Week 9, W26) RenderFlow
- Paper → RenderFlow: Single-Step Neural Rendering via Flow Matching
- Presenter → Manish Mathai
- « Link to talk »
- Pathtracing demo → Link
- In attendance → Simon Casey; Zac Cross; Said Efendiyev; Kai Iverson;
  Mathew Huerta-Enochian; Manish Mathai; Joseph McLaughlin; Eric Zander;
  Yiyang Zhang.
- Cookies → provided by Eric.
- Remarks → FM uses fewer steps during inference, which makes it a great
  candidate for speeding up image-processing applications, and the authors
  claim these fewer steps introduce less "fuzziness" and noise. Maybe! To
  many of us it seems that whether you choose diffusion or FM, you are
  working with basically the same set of tools. (A minimal few-step
  sampling sketch follows below.)
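A minimal sketch of why FM inference can be this cheap, assuming a trained
velocity field. Everything here, including `velocity` and `fm_sample`, is our
illustrative naming, not RenderFlow's code: generation is just ODE integration
from source to data, and near-straight flows survive very coarse Euler steps.

```python
import numpy as np

def velocity(x, t):
    # Placeholder for a trained network v_theta(x, t). For a perfectly
    # straight flow x_t = (1 - t) * x0 + t * x1, the true velocity is the
    # constant x1 - x0; here we use a toy field contracting toward zero.
    return -x

def fm_sample(x0, n_steps):
    """Integrate dx/dt = velocity(x, t) from t=0 to t=1 with forward Euler."""
    x, dt = x0, 1.0 / n_steps
    for i in range(n_steps):
        x = x + dt * velocity(x, i * dt)
    return x

x0 = np.random.randn(4, 2)       # source samples (e.g., Gaussian noise)
print(fm_sample(x0, n_steps=1))  # single-step generation (straight flows)
print(fm_sample(x0, n_steps=4))  # a few steps for curvier flows
```

A vanilla diffusion sampler takes hundreds of small denoising steps over the
same interval; the straighter the learned flow, the fewer Euler steps FM needs.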
2026.02.26 (Week 8, W26) FLOWER
2026.02.20 (Week 7, W26) FM Primer
- Paper → Flow Matching Primer: (a), (b), (c)
- Presenter → Mathew Huerta-Enochian
- Missing... (pester Mathew)
- About → These three papers form much of the foundational theory of FM.
  If you can read all or some of them, please do so! If your time is
  constrained, we recommend reading them in the listed order.
- In attendance → Zac Cross; Michael Dushkoff; Said Efendiyev;
  Mathew Huerta-Enochian; Manish Mathai; Joseph McLaughlin;
  Tamanna Saini; Eric Zander; Yiyang Zhang.
- Remarks →
  MH: The standard manifold hypothesis (SMH) is a key assumption of FM;
  does it actually hold?
  MM: How did the authors generate the same images in Lipman?
  Overfitting on a small dataset?
  (b) does a diffusion-like operation to straighten the flows so they
  more closely approximate OT (note that we are still working within the
  SMH here). The rationale for straighter flows is faster generation.
  (c) uses OT on minibatches to learn OT over the whole model (see the
  sketch after these notes).
  MH: Octagauss? OT paths can cross as long as they do not cross at the
  same time. MD: Why doesn't (c) do a quantitative analysis on the
  manifold? TS: Greedy vs. stochastic OT.
  MD: Are there limitations that emerge when the problem is
  high-dimensional? (Maybe.)
  EZ: Non-Gaussian sources? Varying the Gaussian has an impact.
  Could you do transfer learning this way?
  TS: Applications of greedy vs. stochastic OT.
  MM: How robust is FM to changes in the source? Example application:
  denoising.
  JM: Does the SMH make sense? Even if the objective is
  lower-dimensional, is it strictly continuous?
  MH: This is why CNFs are cool(er than FM): they don't assume the SMH.
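To make the (b)/(c) discussion concrete, here is a hedged sketch of the
conditional FM objective with a minibatch-OT coupling. The exact assignment
via scipy is a stand-in for whatever OT solver the papers actually use, and
all function names are ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cfm_targets(x0, x1, t):
    """Linear path x_t = (1 - t) * x0 + t * x1; target velocity is x1 - x0."""
    xt = (1.0 - t)[:, None] * x0 + t[:, None] * x1
    return xt, x1 - x0

def ot_pair(x0, x1):
    """Re-pair a minibatch with an exact OT assignment on squared Euclidean
    cost, so the conditional straight lines between pairs overlap less."""
    cost = ((x0[:, None, :] - x1[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)
    return x0[rows], x1[cols]

rng = np.random.default_rng(0)
x0 = rng.normal(size=(64, 2))        # source (Gaussian) minibatch
x1 = rng.normal(size=(64, 2)) + 4.0  # "data" minibatch
x0, x1 = ot_pair(x0, x1)             # minibatch-OT coupling, as in (c)
t = rng.uniform(size=64)
xt, v_target = cfm_targets(x0, x1, t)
# A network v_theta(xt, t) would be trained to regress v_target with an
# MSE loss; straighter marginal flows then integrate in fewer steps.
```

The re-pairing matters because straight conditional paths between OT-matched
pairs cross less, which is exactly the "straighter flows, faster generation"
rationale noted above.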
2026.02.13 (Week 6, W26) DiffusionSat
- Paper → DiffusionSat (2024)
- Presenter → Michael Dushkoff
- « Link to talk »
- In attendance → Zac Cross; Michael Dushkoff; Said Efendiyev;
  Mathew Huerta-Enochian; Hritvik Jekki Venkateshwarulu;
  Manish Mathai; Joseph McLaughlin; Eric Zander.
- Comments → Stable Diffusion with CLIP conditioning and a ControlNet.
  Would this generalize, or is this paper just an impressive
  demonstration? Interesting enough to try in Michael's applied
  microscopy tomography research. (A sketch of the ControlNet idea
  follows below.)
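A hedged sketch of the ControlNet mechanism mentioned above (not
DiffusionSat's code; shapes and names are illustrative): a trainable copy of
a frozen block ingests the control signal, and its output re-enters through a
zero-initialized projection, so training starts as a no-op on the pretrained
model.

```python
import numpy as np

class ZeroLinear:
    """Projection initialized to zero, so the control branch contributes
    nothing at the start of training (the 'zero convolution' trick)."""
    def __init__(self, dim):
        self.w = np.zeros((dim, dim))
    def __call__(self, h):
        return h @ self.w

def controlled_block(frozen_block, trainable_copy, zero_proj, x, control):
    # Frozen pretrained path plus a zero-gated residual from the control
    # branch, which sees the features and the control signal.
    return frozen_block(x) + zero_proj(trainable_copy(x + control))

dim = 8
frozen = lambda h: np.tanh(h)  # stand-in for a frozen Stable Diffusion block
copy = lambda h: np.tanh(h)    # trainable copy, initialized from the frozen one
zp = ZeroLinear(dim)
x = np.random.randn(2, dim)    # latent features
c = np.random.randn(2, dim)    # control input (e.g., metadata or a sensor map)
out = controlled_block(frozen, copy, zp, x, c)
assert np.allclose(out, frozen(x))  # zero init => identical to the base model
```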
More papers from an even less organized time.