<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Deep Learning on Ethan's Blog</title><link>https://blog.ethanlyu.top/tags/deep-learning/</link><description>Recent content in Deep Learning on Ethan's Blog</description><image><title>Ethan's Blog</title><url>https://blog.ethanlyu.top/%3Clink%20or%20path%20of%20image%20for%20opengraph,%20twitter-cards%3E</url><link>https://blog.ethanlyu.top/%3Clink%20or%20path%20of%20image%20for%20opengraph,%20twitter-cards%3E</link></image><generator>Hugo -- 0.152.2</generator><language>en</language><lastBuildDate>Tue, 21 Oct 2025 00:00:00 +0000</lastBuildDate><atom:link href="https://blog.ethanlyu.top/tags/deep-learning/index.xml" rel="self" type="application/rss+xml"/><item><title>Deep Generative Model I</title><link>https://blog.ethanlyu.top/posts/deep-generative-model-i/</link><pubDate>Tue, 21 Oct 2025 00:00:00 +0000</pubDate><guid>https://blog.ethanlyu.top/posts/deep-generative-model-i/</guid><description>&lt;p&gt;The rise of large models has changed how we approach intelligence. For decades, people worked on &amp;ldquo;discriminative&amp;rdquo; intelligence: given an image of a dog, we predict the label &amp;ldquo;dog&amp;rdquo;. To do so, we train on a labeled dataset spanning different categories, with the goal that every output matches its corresponding class. The great success of LLMs has brought us into a new era: GenAI. Unlike a discriminative model, which maps an image to a label, a &lt;em&gt;generative model&lt;/em&gt; produces a &amp;ldquo;dog&amp;rdquo; image given an input prompt (text, image, &amp;hellip;).
&lt;img loading="lazy" src="https://blog.ethanlyu.top/attachment/40665896e722d64746aa9972abd6199d.png"&gt;&lt;/p&gt;</description></item></channel></rss>