Even_Adder@lemmy.dbzer0.com to Stable Diffusion@lemmy.dbzer0.com (English) · 19 days ago
Stable Diffusion 3 Medium Fine-tuning Tutorial — Stability AI (stability.ai)
clb92@feddit.dk · 19 days ago (edited)
People have been training great Flux LoRAs for a while now, haven’t they? Is a LoRA not a finetune, or have I misunderstood something?
Even_Adder@lemmy.dbzer0.com (OP) · 19 days ago
Last I heard, LoRAs cause catastrophic forgetting in the model, and full fine-tuning doesn’t really work.
clb92@feddit.dk · 19 days ago
Oh well, in practice I’ll just continue to enjoy this (possibly forgetful and not-fully-finetunable) model then, that still gives me amazing results 😊
erenkoylu@lemmy.ml · 17 days ago (edited)
Quite the opposite. LoRAs are very effective against catastrophic forgetting, and full finetuning is very dangerous (but also much more powerful).
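[Editor's note: a minimal, hypothetical sketch of why a LoRA resists catastrophic forgetting — the pretrained weight matrix stays frozen and only a small low-rank correction is trained, so the base model's knowledge cannot be overwritten. All names here are illustrative, not from any specific library.]

```python
import numpy as np

class LoRALinear:
    """Illustrative LoRA-adapted linear layer (not a real library API).

    The pretrained weight W is frozen; training would only update the
    small low-rank factors A and B. Since W never changes, the base
    model's knowledge is preserved, unlike full fine-tuning, which
    rewrites W directly.
    """

    def __init__(self, W, rank=4, alpha=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                                     # frozen (out x in)
        out_dim, in_dim = W.shape
        self.A = rng.normal(0.0, 0.01, (rank, in_dim)) # trainable
        self.B = np.zeros((out_dim, rank))             # trainable, init to 0
        self.scale = alpha / rank

    def forward(self, x):
        # frozen path + scaled low-rank correction
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

W = np.eye(3)                       # stand-in for pretrained weights
layer = LoRALinear(W)
x = np.array([1.0, 2.0, 3.0])
# B starts at zero, so before training the adapted layer is exactly
# the frozen base layer:
print(np.allclose(layer.forward(x), W @ x))  # True
```

Full fine-tuning, by contrast, updates every entry of W, which is what makes it both more powerful and more prone to destroying pretrained behavior.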