Tutorial Info 🚀 Welcome to the Ultimate SECourses Upscaler Pro & Trellis 3D Tutorial!
Greetings everyone! Today, I am incredibly excited to showcase the massive new improvements and brand-new features we have added to the SECourses Upscaler Pro application. I have been working non-stop to bring you a studio-level AI video and image enhancement tool that completely redefines what is possible running locally on your own PC.
In this video, we dive deep into side-by-side comparisons between our custom FlashVSR+ upscaler, original viral social media videos, and Topaz AI. As you will see in our live slider comparisons, the SECourses Upscaler Pro is adding 10x more detail than Topaz AI, generating breathtaking, high-definition results while running highly optimized on GPUs with as little as 8GB of VRAM!
We also explore the immensely powerful SeedVR2 model for flawless 4x image upscaling, and I give you an exclusive sneak peek at our upcoming Trellis Image-to-3D application featuring fully automated UniRig 3D character rigging!
SECourses Ultimate Video and Image Upscaler Pro is now at V2.1, and massive improvements have arrived
Check the screenshots below to see all the amazing features
20 February 2026 Update V2.1: This is a pretty big update
We have completely replaced the FlashVSR+ backend with a new repo, which I have significantly upgraded
The new FlashVSR+ works amazingly well, and I think it is better than SeedVR2 for high-resolution video upscaling, such as upscaling 720p footage to higher resolutions
The top navigation bar has been updated with a better layout and view
The FlashVSR+ tab has been remade and all of its features are now working
A low-VRAM button has been added, which you can use if you get an OOM (out-of-memory) error
Read the updated UI text to understand how to use it
FlashVSR+ can now also upscale images very well
The Image-Based GAN Upscalers tab has also been improved and some bugs have been fixed
In the Output & Comparison tab, Video Output was not working properly, and this issue has been fixed
In the Output & Comparison tab, new multi-video and multi-image comparison sliders have been added, which are super useful for quickly comparing multiple videos and images
Lots of various bug fixes have been made
The app is getting closer to perfect, so please test it heavily and let me know about any errors and the features you would like
This update was mostly about improving FlashVSR+, since it is a very fast and amazing video upscaler model
Image-Based GAN upscaling can now upscale videos perfectly fine, and Batch Size (Frames per Iteration) is now working to speed up video upscaling
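For readers curious what "Batch Size (Frames per Iteration)" means in practice, here is a minimal, hypothetical sketch of the idea: a video's frames are grouped into fixed-size chunks so the upscaler processes several frames per GPU call instead of one at a time. The function name `batch_frames` is illustrative, not the app's actual code:

```python
# Minimal sketch: group frames into fixed-size batches so an
# upscaler can process several frames per iteration instead of one.
def batch_frames(frames, batch_size):
    """Yield consecutive chunks of `frames`, each up to `batch_size` long."""
    if batch_size < 1:
        raise ValueError("batch_size must be >= 1")
    for start in range(0, len(frames), batch_size):
        yield frames[start:start + batch_size]

# Example: 10 frames with a batch size of 4 -> chunks of 4, 4, 2.
frames = list(range(10))
print([len(b) for b in batch_frames(frames, 4)])  # [4, 4, 2]
```

Larger batches amortize per-call overhead and keep the GPU busier, at the cost of more VRAM per iteration, which is why the setting is exposed as a tunable value.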
To update, download the latest zip file, extract it, overwrite all files, and run the Windows_Run_SECourses_Upscaler_Pro.bat file
A studio-level video and image upscaler app has been long awaited. Today we are publishing version 1.0 of SECourses Ultimate Video and Image Upscaler Pro. It supports SeedVR2, FlashVSR+, GAN-based upscalers, RIFE frame interpolation, a full queue system, full batch folder processing, scene/chunk-based processing, and much more. It works fully on all cloud and consumer GPUs, such as the RTX 2000, 3000, 4000, and 5000 series, as well as H100, H200, B200, and RTX PRO 6000. The app currently installs with the latest Torch and CUDA versions, fully automatically, with pre-compiled libraries. Even Torch compile works fully and automatically.
Info: LTX 2 is the newest state-of-the-art (SOTA) open-source video generation model, and this tutorial will show you how to use it in the best and most performant way in ComfyUI and also in SwarmUI. Moreover, the Z Image Base model has been published, and I will show you how to use Z Image Base with the most amazing preset and workflow as well. Furthermore, this tutorial will show you how to install, update, set up, and download ComfyUI and SwarmUI, along with models, presets, and workflows, both on Windows and on RunPod, Massed Compute, and SimplePod. Linux users can use the Massed Compute scripts and installers directly. This is a masterpiece, lecture-level, complete tutorial. This video will kickstart your AI journey 100x, both locally on Windows and in the cloud.
45 Second Raw Demo Video
This video was made with text + image + audio = a lip-synced and animated video, all at once
Compared Quality and Speed Difference (with CUDA 13 & Sage Attention) of BF16 vs GGUF Q8 vs FP8 Scaled vs NVFP4 for Z Image Turbo, FLUX Dev, FLUX SRPO, FLUX Kontext, FLUX 2 - Full 4K step by step tutorial also published
Check the full 4K tutorial above to learn more and to see the uncompressed, original-quality, full-size images
It has always been wondered how much quality and speed difference exists between the BF16, GGUF, FP8 Scaled, and NVFP4 precisions. In this tutorial, I have compared all of these precision and quantization variants for both speed and quality. The results are pretty surprising. Moreover, we have developed and published an NVFP4 model quant generator app and an FP8 Scaled quant generator app. The links to the apps are below if you want to use them. Furthermore, upgrading ComfyUI to CUDA 13 with properly compiled libraries is now very much recommended. We have observed some noticeable performance gains with CUDA 13. So for both SwarmUI users and standalone ComfyUI users, CUDA 13 ComfyUI is now recommended.
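To build intuition for why lower-precision quants trade a little quality for a lot of size and speed, here is an illustrative sketch using simple symmetric integer quantization. This is a simplified stand-in, not the actual FP8 Scaled or NVFP4 formats (which are floating-point), but the fewer-bits-means-larger-rounding-error trade-off is the same idea:

```python
# Illustrative sketch: symmetric integer quantization as a simplified
# stand-in for low-precision weight formats. Fewer bits -> coarser grid
# -> larger rounding error, but smaller and faster weights.
def quantize_dequantize(values, bits):
    """Round values onto a symmetric integer grid, then map them back."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for 8 bits
    scale = max(abs(v) for v in values) / qmax  # one scale per tensor
    return [round(v / scale) * scale for v in values]

weights = [0.013, -0.74, 0.52, 1.0, -0.29]
for bits in (8, 4):
    approx = quantize_dequantize(weights, bits)
    max_err = max(abs(a - b) for a, b in zip(weights, approx))
    print(f"{bits}-bit max rounding error: {max_err:.4f}")
```

Running this shows the 4-bit grid introduces a visibly larger maximum error than the 8-bit grid, which is why formats like NVFP4 rely on careful per-block scaling to keep quality loss minimal.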
Finally, NVFP4 models have arrived in ComfyUI, and thus SwarmUI, with CUDA 13. NVFP4 models are literally 100%+ faster with minimal impact on quality. I have done a grid quality comparison to show you the difference between the NVFP4 versions of FLUX 2, Z Image Turbo, and FLUX 1. To make CUDA 13 work, I have compiled Flash Attention, Sage Attention, and xFormers for both Windows and Linux with all of the CUDA archs, to support literally all GPUs starting from the GTX 1650 series, the RTX 2000, 3000, 4000, and 5000 series, and more.
In this full tutorial, I will show you how to upgrade your ComfyUI, and thus SwarmUI, to use the latest CUDA 13 with the latest libraries and Torch 2.9.1. Moreover, our compiled libraries, such as Sage Attention, work with all models, such as Qwen Image or Wan 2.2, on all GPUs, without generating black images or videos. Hopefully, LTX 2 presets and a tutorial are coming soon too. Finally, I introduce a new private cloud GPU platform called SimplePod, similar to RunPod. This platform has all the same features as RunPod but is much faster and cheaper.
The Qwen Image 2512 Text to Image model is a massive upgrade in quality, just as Qwen Image Edit 2511 was for command-based image editing tasks. I have trained the older Qwen Image base, the new Qwen Image 2512 base, the older Qwen Image 2509 edit model, and the newer Qwen Image 2511 edit model, and compared them in this video. The results are astonishing. Moreover, we have converted all of our premium SwarmUI image and video generation presets into ComfyUI workflows. Just drag and drop a workflow and start using it immediately. All models are downloaded with our premium 1-click model downloader app. ComfyUI and SwarmUI are also installed with 1-click installers, and our ComfyUI fully supports Sage Attention, Flash Attention, xFormers, Triton, and more on the RTX 5000 series and all other GPUs, like the 2000, 3000, and 4000 series, on both Linux and Windows.
Generating a workflow inside SwarmUI and using it in ComfyUI is literally 1-click. In this tutorial, I will show you the easiest way to use our 40+ amazing generative AI presets, made for SwarmUI, in ComfyUI. You will be able to get the very best outcomes from all AI models, such as SDXL, FLUX, Z Image Turbo, Wan 2.1, Wan 2.2, FLUX 2, Qwen Image, Qwen Image Edit, FLUX Kontext, Image Outpainting, Image Inpainting, and many more. Moreover, I will show you how to use custom model paths in ComfyUI and SwarmUI to unify your models in the same folder, avoid model duplication, and save a massive amount of disk space.
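For context on how custom model paths work, ComfyUI supports an `extra_model_paths.yaml` file in its root folder that points it at a shared model directory, so multiple UIs can reuse the same files. A minimal sketch, with example paths that you would adjust to your own setup:

```yaml
# extra_model_paths.yaml (in the ComfyUI root folder)
# Example paths only - point base_path at your own shared model folder.
comfyui:
  base_path: D:/AI/shared_models
  checkpoints: checkpoints
  loras: loras
  vae: vae
```

Each key under `comfyui` maps a model type to a subfolder of `base_path`, so the same checkpoint files can serve ComfyUI and SwarmUI without duplication.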
The Qwen Image Edit 2511 model was just published, and it is literally competing against Nano Banana Pro at image editing tasks. With a whopping native 2560x2560-pixel image output capability and only 12 steps, it is next level. With our installers and our specially made FP8 Scaled quant model, you can run this amazing beast even on GPUs with as little as 6 GB of VRAM. In this tutorial, I have compared Qwen Image Edit 2511 with its predecessor, Qwen Image 2509, across 12 different unique and hard prompts and cases. Everything is explained and provided step by step.
Wan 2.2 Complete Training Tutorial - Text to Image, Text to Video, Image to Video, Windows & Cloud : https://youtu.be/ocEkhAsPOs4
Wan 2.2 training is now so easy. I have done over 64 different unique Wan 2.2 trainings to prepare the very best working training configurations for you. The configurations work fully locally on GPUs with as little as 6 GB of VRAM. So you will be able to train your awesome Wan 2.2 image or video generation LoRAs on your Windows computer with ease. Moreover, I have shown how to train on the cloud platforms RunPod and Massed Compute, so even if you have no GPU, or you want faster training, you can train in the cloud for very cheap prices, fully privately.
The long-awaited official Wan 2.2 training configs have been published after massive research. For this research, literally over 64 unique trainings were made on an 8x B200 cloud machine. I thank Enverge AI a lot for providing this research machine for free. They didn't even request that I shout them out.
Whisper-WebUI Premium - Ultra Fast and High Accuracy Speech-to-Text Transcription App for All Languages - Windows, RunPod, Massed Compute 1-Click Installers - Supporting RTX 1000 to 5000 Series
Z-Image Turbo LoRA training with the Ostris AI Toolkit + Z-Image Turbo Fun Controlnet Union + 1-click download and install of the very best Z-Image Turbo presets. In this tutorial, I will explain how to set up the Z-Image Turbo model properly on your local PC with SwarmUI, and how to download models and use them at the highest quality via ready-made presets. Moreover, I will show how to install Z-Image Turbo Fun Controlnet Union to generate amazing-quality images with ControlNet preprocessors. Furthermore, I will show how to 1-click install the AI Toolkit from Ostris and train Z-Image Turbo LoRAs with the highest-quality configs, made for every GPU: 8 GB GPUs, 12 GB GPUs, 24 GB GPUs, and so on. I did massive research to prepare these Z-Image Turbo training configurations.
FLUX 2 vs FLUX SRPO, New FLUX Training Kohya SS GUI Premium App With Presets & Features : https://youtu.be/RQHmyJVOHXo
FLUX 2 has been published, and I have compared it to the very best FLUX base model, known as FLUX SRPO. Moreover, we have updated our FLUX training app and presets to the next level: massive speed-up gains with zero quality loss and lots of new features. I will show all of the new features in the new SECourses Kohya SS GUI Premium app and compare FLUX SRPO trained-model results with FLUX 2.
⏱️ Video Chapters:
0:00 Introduction to New FLUX Training Improvements and Local Training Showcase
0:24 Understanding FLUX SRPO Model: High Realism with Minimal VRAM Requirements
0:38 Updated Configurations for Training Realism on 6GB VRAM GPUs Locally
1:07 FLUX 2 Announcement and Setting Up Comparisons with BFL Playground
1:45 FLUX 2 Dev Model Technical Specs: 32 Billion Parameters and Hardware Challenges
2:11 Overview of Changes in SECourses Premium Kohya Trainer Version 35
2:46 Development Updates: GUI Improvements and Full Torch Compile Support
3:13 LoRA Presets Update: VRAM Optimization and Speed Improvements via Torch Compile
3:27 Introducing On-the-Fly FP8 Scaled LoRA Training Support
3:42 Quality Comparison Analysis: BF16 vs FP8 Scaled Weights LoRA
4:24 VRAM Usage and Speed Analysis: Block Swap Count Reduction with FP8 Scaled
...
Next-level realism with Qwen Image is now possible with the new realism LoRA workflow. The top images use the new realism workflow; the bottom ones use the older default. The full tutorial is published. 4+4 steps only
This is a full comprehensive step-by-step tutorial for how to train Qwen Image models. This tutorial covers how to do LoRA training and full Fine-Tuning / DreamBooth training on Qwen Image models. It covers both the Qwen Image base model and the Qwen Image Edit Plus 2509 model. This tutorial is the product of 21 days of full R&D, costing over $800 in cloud services to find the best configurations for training. Furthermore, we have developed an amazing, ultra-easy-to-use Gradio app to use the legendary Kohya Musubi Tuner trainer with ease. You will be able to train locally on your Windows computer with GPUs with as little as 6 GB of VRAM for both LoRA and Fine-Tuning.
I just did my research on the latest Kimi K2 Thinking launch from Moonshot AI, and I strongly believe that this moment is a major inflection point in the open agentic AI ecosystem.
The breakthrough lies in test-time scaling, moving us past constrained generation to long-horizon problem-solving agents. They have shown the capacity for 200-300 sequential tool calls, with context and reasoning preserved throughout, alongside self-correction over extended computation. And, remember, this is the worst it's ever going to be.
We have demonstrably entered the phase of deep, structured cognition, and the ability to perform 23 interleaved reasoning steps to solve a PhD-level math problem is a great demonstration of this cognitive depth.
Unsurprisingly, the SOTA benchmarks reinforce this reality. More crucially for the industry, this is an open-weights release.
Thanks to the Moonshot team for providing a new anchor point for the open AI ecosystem.