I'm absolutely in love with SeedVR2 and the FP16 model. Honestly, it's the best upscaler I've ever used. It keeps the image exactly as it is: no weird artifacts, no distortion, nothing. Just super clean results.
I tried GGUF before, but it messed with the skin a lot. FP8 didn’t work for me either because it added those tiling grids to the image.
Since the models get downloaded directly through the workflow, you don’t have to grab anything manually. Just be aware that the first image will take a bit longer.
I'm just using the standard SeedVR2 workflow here, nothing fancy. I only added an extra node so I can upscale multiple images in a row.
The base image was generated with Z-Image, and I'm running this on a 5090, so I can’t say how well it performs on other GPUs. For me, it takes about 38 seconds to upscale an image.
Here’s the workflow:
Test image:
https://imgur.com/a/test-image-JZxyeGd
Model if you want to manually download it:
https://huggingface.co/numz/SeedVR2_comfyUI/blob/main/seedvr2_ema_7b_fp16.safetensors
Custom nodes:
For the VRAM cache nodes (not strictly required, but I'd recommend it, especially if you work in batches):
https://github.com/yolain/ComfyUI-Easy-Use.git
SeedVR2 nodes:
https://github.com/numz/ComfyUI-SeedVR2_VideoUpscaler.git
For the "imagelist_from_dir" node
Same, I found it in a workflow and I've been Frankensteining it into all my new and old workflows, both on Z-Image and Illustrious. My favorite part of generating now is probably moving that compare slider. Surprisingly, it runs faster than Ultimate SD Upscale with way better results. I have to use the tiles on a 12 GB 4070.
It adds detailed textures and fixes eyes, teeth, and finer things like thin necklaces (which are usually a mess before, especially with SDXL/Illustrious).
I think we share the same passion for it. Sometimes I also just can't stop moving the slider back and forth for minutes! With Ultimate Upscale, my skin always got bad, or my eyelashes got stuck together, or something like that. I definitely think it's better.
It often makes me smile, but very often the effect is way too strong, giving flat plastic skin. Is there a setting I've missed to back it off? I often use the 3B model just to not get too much effect.
Perhaps blending in the original is a good way, are there other ways?
But when it works, like upscaling a dense forest, it's amazing the depth it creates, and it's very crisp. And it's done in seconds.
I use it a lot with ZIP, a great combo.
I got the best results with the 7B FP16, but it is VRAM heavy. Had the same issues with the GGUF version: the skin was terrible with it, and that's largely gone with the 7B FP16.
One of the guys in the comments told me about blending:
"Parallel processing is just mixing the original in with the enhanced version.
Use Krita. It's free and with the AI diffusion plugin, you can literally send the output of your workflow directly to a layer.
Then add the original image as another layer and just dial in the opacity to 5-10% or wherever starts to look the best.
It will soften the result of the upscaled layer a bit so you can lose the waxiness and other upscaling artifacts.
There are also plenty of blend modes that may or may not look better than just a normal opacity change, depends on the images.
Maybe you'll still feel the urge to process it more after that to get maximum sharpness, but I find the over-detailing to be a dead giveaway that it's not a real photo, even in the best works."
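If you'd rather script that blend-back step instead of doing it in Krita, here's a minimal Pillow sketch of the same idea; the function name and the 8% default opacity are my own choices, not anything from the comment above:

```python
from PIL import Image

def blend_back_original(upscaled: Image.Image, original: Image.Image,
                        opacity: float = 0.08) -> Image.Image:
    """Mix 5-10% of the original back into the upscaled image to soften
    waxy skin and other upscaling artifacts."""
    # The original must match the upscaled resolution before blending.
    original = original.convert("RGB").resize(upscaled.size, Image.LANCZOS)
    # Image.blend(im1, im2, alpha) computes im1*(1-alpha) + im2*alpha.
    return Image.blend(upscaled.convert("RGB"), original, opacity)
```

Same knob as the Krita layer opacity: start around 0.05-0.10 and dial it in per image.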
What is the maximum/recommended image size for SeedVR2 please?
idk sorry
It bricks my PC anytime I try to use it, and I've messed with the settings a bunch. I'm on a 4080 Super.
Took me a while to find how to run it without crashing too. I'm using the photoflow workflow from Civitai. I set blocks to swap to 30 and enable the tiles option (and set them to 768). There's an explanation in the workflow.
Wow, thank you. I'll have to give it a shot again.
Been experimenting with Ultimate SD Upscale since the glory days of A1111 but never really liked it. It's just too slow and annoying to use.
No tiling needed. Even SDXL, the backbone of Ultimate, is 6 GB, so you'd be breaking an image into tiles, running Ultimate on each, then restitching, which never looks good. SeedVR can even do videos, as it has a time dimension, so it shouldn't mess much up because of that kind of thing.
Nice. I used SeedVR2 briefly and was kind of disappointed at the fuzziness. I assumed it was because it was meant for video(?). I'm clearly doing something wrong and need to take another peek at it. I honestly was getting decent results by letting Z-Image rip at higher resolutions and fixing borked details with ADetailer nodes. Of course this changes your entire image, though. Still figuring this out. Thanks for sharing!
It depends; SeedVR2 seems heavily optimized for European blue-eyed women. If you give it something different, it's not as good. It makes the people I upscale all blue-eyed, which is funny.
I faced the same issue in the HF Spaces etc. Running it in Comfy, it gives great results (better than SUPIR) even for Indian people, where even a slight EU influence will wreck the person.
It never gives me blue eyes when I use it, tbh, unless the eyes were already blue.
https://preview.redd.it/1rbm9pshn96g1.png?width=2442&format=png&auto=webp&s=83a374e041820dffd9678bc98b1042c72e7b974d
You see it, right? I'm not going crazy, right?
You should definitely give it a second shot, it is worth it :) You are welcome!
SeedVR is absolute magic! Kudos to the developers! They made the new node super flexible and easy to set up. Memory management is very good, if you know what you're doing.
Indeed! May I ask if you've used Torch compile in SeedVR? I'd like to know if it makes a difference. Otherwise, do you have any tips on how to optimize performance?
Not using compile. It's pretty fast on 3090, around 20 seconds. I need to set up flash attention for further gains.
yeah I'm also using flash attention. it is pretty fast with it!
To me it boosts the contrast so much that the image looks way more AI generated than the low resolution input image.
I can't agree with that. That's not the case for me. The contrast is always the same.
Did you change any settings in the default node?
I can't remember anymore, since I set that up two weeks ago.
You could add the node again and compare to the default or screenshot your current node. I'd be happy to be wrong.
https://preview.redd.it/apxmg814466g1.png?width=2370&format=png&auto=webp&s=c350c625f449bf8663238e1268cf8b22b4c106a0
:)
Thanks, I'll try your settings first chance I get
I really have no idea how well my settings work on other GPUs besides my 5090. I didn't design it for low VRAM or anything like that, if that concerns you. But you're welcome to let me know if it works for you.
Got a 5090 also
great for seedvr!
Which GPU are you using?
What's it like with SageAttn?
Idk if it's supported. I also just know about flash.
What tool is this?
seedvr2 upscaler
I tried video upscaling with it, but I quickly realized I'd need datacenter-class GPUs for it. Image upscaling was quite good, though.
You just need to use batches that your hardware can accommodate. Unfortunately the process isn't automatic, you need to figure it out
Yep. I upscaled a 1:30-minute video from 640x360 to about 1400x900 (something like that, I don't remember exactly) and it took about 1.5 hrs on an RTX 3090.
That's pretty much an episode of Xena or some other old show each night? At 1.5 hours per 1.5 minutes of footage, a 20-minute episode is about 20 hours.
Guess Xena would be more like over 24h, ah well wish it was faster haha
I'm not at that point yet and have only been able to test it with images so far. But I can imagine that it will reach its limits there.
Tried it this weekend:
Upscaling a 1-min 720p video to FHD took ~3-4 hrs (didn't time it) on a 5070 Ti; batch size was 33.
Results were underwhelming to ok-ish, but maybe that was because the source material was really bad.
3-4hrs is hard...
It is, especially compared to how fast it is with single images.
I might try it again, but then I'll start it before I leave for work.
That sounds almost sad for one video. I mean, at least the result has to be really good
it kind of is...
Doesn't sound very convincing ^^
Maybe my post wasn't clear:
got it bro!
From what I saw, someone was able to do short little videos, about Wan length, with it, using all the block offload and a 3090. It balloons for sure, though.
Love it too, however I am struggling with going beyond like 1536x res on my 5080. Any tips here?
Try the tiling upscaler node
https://github.com/moonwhaler/comfyui-seedvr2-tilingupscaler
I tried upscaling images up to 10k (could probably go even higher, but the preview gets jittery lol) without issues.
*I have a 5070 Ti with 16 GB VRAM.
Can you show your settings? I never could figure out how to get the tiling one to work.
Sure, here's the workflow. Make sure to install the SeedVR tiling node manually via git, not through the manager.
https://pastebin.com/LrQ7bKWw
I've looked all over for this node, and I can't find it; even Comfy isn't popping it up. Can you post the link for it? Much appresh, mate.
it's in my previous reply...
https://github.com/moonwhaler/comfyui-seedvr2-tilingupscaler
So it's the same one I was using. Man, I had the worst time updating this. Apparently, I'm using an older version, and the nodes for SeedVR2 didn't want to show up. I had to not only git clone, but overwrite the existing folder, or it would just be missing nodes as well. Christ. Thank you for helping me notice this. I wonder what other ones my Comfy isn't properly updating...
Haha, yeah, it's a bit of a mess; I was going through the same. I think most nodes should do fine; the update for the tiling upscaler just came out yesterday and maybe didn't get pushed properly to the manager list. Either way, glad it worked out, have fun testing.
You mean 1536 just in SeedVR? There should definitely be more potential with a 5080. I mean, you can easily generate a 2000x2000 px image with Z-Image and then test scaling it to 3000x3000 px in SeedVR. How much VRAM does your 5080 have?
16gb VRAM.
I figured out what the issue was: my workflow wasn't really optimized. Tiled VAE was disabled.
I noticed that your workflow does not include a Join Image with Alpha node, so it doesn't support images with alpha channels.
Another thing I want to point out: I noticed I get 2-3x better results if I upscale anywhere up to 1.5 to 3x the input resolution and then just upscale that again, instead of going straight to 4K. This is especially true if you upscale small images, though it might only hold for the specific images I upscale.
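A rough sketch of that intermediate-size idea (the 3x cap per pass is just my rule of thumb, not anything from the SeedVR2 docs):

```python
def upscale_plan(src, target, max_step=3.0):
    """Plan intermediate resolutions so each upscaling pass scales by at
    most `max_step`, instead of jumping straight to the final size."""
    w, h = src
    tw, th = target
    steps = []
    while w < tw or h < th:
        # Scale factor still needed to reach the target, capped per pass.
        factor = min(max_step, max(tw / w, th / h))
        # max(w + 1, ...) guarantees progress even for tiny factors.
        w = min(tw, max(w + 1, round(w * factor)))
        h = min(th, max(h + 1, round(h * factor)))
        steps.append((w, h))
    return steps
```

For example, going from 1000x1000 to 4000x4000 becomes two passes (3000x3000, then 4000x4000) rather than one 4x jump.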
[deleted]
That's an amazingly clean workflow. Congrats! So you upscale 2.5x and 3x again? That's huge... (If you like to share that workflow that would be awesome...)
Thanks, I really appreciate it! It still looks a bit chaotic to me. I can't share the workflow yet because I still need to fine-tune things. 8 out of 10 images come out perfectly clean and sharp. With 2 out of 10, the skin texture is terrible. I'd like to release it when it works reliably.
Yes, exactly, I'm using two latent upscalers. The problem is, if you only use one, you have to set the denoise to 0.70 to avoid artefacts, but then you lose consistency. It changes the composition.
To prevent that, I've added another upscaler in between to smoothly scale everything up. I start with a resolution of 224x224 pixels. In the end, I get a 4000x4000 pixel photo. The cool thing is, it actually runs through on a 5090 in 65 seconds.
Everything has been tested for sweet spots. One more step in the second upscaler and the skin becomes too rough. The big benefit of all this is that you can control the image generation even more precisely.
I too would be interested in the workflow, mainly to see which nodes you use for prompt generation ;)
Primarily standard samplers, plus latent upscale nodes and, in between, 2x (1x skindetail) upscale models. There's also a detail daemon, which could be omitted. It just adds more detail to the image.
yep, by far the best upscaling model i've tested.
feels almost like magic.
That also reflects my experience! magic... indeed. I could spend hours playing around with the compare slider after upscaling with seedvr2
This is to make a good image better, right? I'm trying it on not-so-good images to start, and maybe I'm missing some parameter or something; the final result has a lot of pixel noise and oversharpening.
use the base fp16 model, not the "sharp" one.
I did, in full weights too, but the problem was hard to fix. It seems that by playing around with the noise values (both of them), which are extremely finicky, results start improving.
I think it's the fault of the initial image having ISO noise.
I've noticed that SeedVR2 works well for screenshots from videos (i.e., frames), but badly for photos. It's interesting, because StableSR does the exact opposite: it works well on static photos, but badly on frames.
https://preview.redd.it/2gttpgb1v66g1.png?width=2553&format=png&auto=webp&s=b68f735ed0462f0999ab25ffc6e9bb72ed4cbeef
I think it really depends on the input size? My images all go in at 1736x1736px and come out perfectly clean at 4000x4000px
https://preview.redd.it/v4k9t14fv66g1.png?width=1677&format=png&auto=webp&s=5810e50fa3298d40b75761c62d0c314368fb4a18
Hm, so it also works for some photos too.
Btw, I've noticed that this lizard-skin effect can be removed well by simply decreasing the image opacity to around 0.55.
sounds interesting! Which node are you using for opacity? I'd like to try it right away. Thanks for the tip!
ImageBlend, where blend_factor is the opacity; image1 is the upscaled image, image2 is the non-upscaled one. It's better to save this low-opacity variant as well, just in case of an unexpected lizard invasion.
So image1 is the upscaled one, image2 is the one from before upscaling (or nothing), and the factor is set to 0.55?
https://preview.redd.it/0pcw8608276g1.png?width=607&format=png&auto=webp&s=bece9a07ed365638576027982da074c5cbf6e843
yes
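For anyone who wants to do the same blend outside ComfyUI: this is my reading of what a normal-mode ImageBlend at blend_factor 0.55 amounts to (a NumPy sketch under that assumption; double-check against your node's actual behavior):

```python
import numpy as np

def image_blend(image1: np.ndarray, image2: np.ndarray,
                blend_factor: float = 0.55) -> np.ndarray:
    """Normal-mode linear blend: factor 0.0 returns image1 (the
    upscale), factor 1.0 returns image2 (the pre-upscale original)."""
    a = image1.astype(np.float32)
    b = image2.astype(np.float32)
    out = a * (1.0 - blend_factor) + b * blend_factor
    return np.clip(out, 0, 255).astype(np.uint8)
```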
Do I need to increase or decrease the blend factor if I want more texture? It seems to work really well!
Unfortunately, it looks like there is no way to use StableSR in ComfyUI on medvram GPUs, because Tiled Diffusion for ComfyUI doesn't support StableSR :(
Definitely suffering from the waxy skin, but a lot less so than other things that give good detail like gonzalomo.
Even if you just parallel blend it back into a layer of the original image by <10% it probably gets rid of that and would look indistinguishable from reality.
Someone here knows their stuff. Would you mind sharing a short workflow with me on how you use parallel blending? I don't know that technique. Thank you for your constructive feedback!
Parallel processing is just mixing the original in with the enhanced version.
Use Krita. It's free and with the AI diffusion plugin, you can literally send the output of your workflow directly to a layer.
Then add the original image as another layer and just dial in the opacity to 5-10% or wherever starts to look the best.
It will soften the result of the upscaled layer a bit so you can lose the waxiness and other upscaling artifacts.
There are also plenty of blend modes that may or may not look better than just a normal opacity change, depends on the images.
Maybe you'll still feel the urge to process it more after that to get maximum sharpness, but I find the over-detailing to be a dead giveaway that it's not a real photo, even in the best works.
That sounds really good! I'll definitely try it out! Thank you so much for the detailed answer!
This seems really amazing, but would I have any use for it? I've only got like 12 GB VRAM. What kinda speed does an upscale go at? I'm kinda partial to my Ultimate SD Upscale.
It is working fine with my 3060 12 GB and 64 GB of RAM. I tweaked the settings down a bit, 1536 or 2048 for res, but I am still getting a really nice improvement. The left side is the upscale.
https://preview.redd.it/j15t777yh66g1.png?width=1431&format=png&auto=webp&s=138bdb5820b32bf7c49a0cbe7424f16a889a0cd5
Unfortunately, I can't tell you that, as I was only able to test it on a 5090. Someone else posted this here:
https://github.com/moonwhaler/comfyui-seedvr2-tilingupscaler
Perhaps this will be more efficient if you can't get it to work with mine.
For me it's slightly faster than Ultimate SD with way better results, on a 12 GB 4070.
Thank you!
kewl thanks for the workflow too
you're welcome!
Does the workflow download things automatically or is it the SeedVR2 node that downloads its model?
You have to install the SeedVR2 custom nodes. The workflow includes a downloader node, which automatically loads the respective model the first time.
I found it more viable to use SeedVR to upscale and then do a second pass on a scaled down image to clean up the details. SeedVR is extremely impressive and works really well, but it definitely benefits from some cleaning up.
indeed! sounds interesting and makes sense!
thanks! its very useful
you're welcome! Thank you!
Does this work with forge?
idk...
Prepping the source image by adding a bit of blur/noise does wonders.
what do you mean by that?
https://preview.redd.it/zi6qw69wv66g1.png?width=540&format=png&auto=webp&s=b87e094e6b17b0b00ffa58b39fc53c9cfb294748
Currently, I'm using an upscale model before scaling up. It makes a huge difference, but I haven't found anything better yet. Do you mean I should do a noise injection beforehand?
https://preview.redd.it/ugmscxi1b86g1.png?width=2069&format=png&auto=webp&s=23f2aed6263f2ccd0605e28913caea33599913b5
This works best for me.
I use the blur node ~80% of the time; sometimes "add noise" works well.
My WF:
1. Start with a low-quality image. Direct upscale to 2 MP (1-2 MP) with the workflow above.
2. Upscale again to 6-9 MP, depending on whether your graphics card can handle a direct upscale.
Notes:
a. Blur node ON in both steps.
b. Use tiled upscale if you need to go to higher resolutions.
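A minimal sketch of that prep step with Pillow and NumPy; the blur radius and noise strength here are my guesses to tune per image, not values from the workflow above:

```python
import numpy as np
from PIL import Image, ImageFilter

def prep_source(img: Image.Image, blur_radius: float = 0.6,
                noise_std: float = 4.0, seed: int = 0) -> Image.Image:
    """Slightly blur the source, then inject mild Gaussian noise,
    before sending it to the upscaler."""
    out = img.convert("RGB").filter(ImageFilter.GaussianBlur(blur_radius))
    arr = np.asarray(out).astype(np.float32)
    rng = np.random.default_rng(seed)
    arr += rng.normal(0.0, noise_std, arr.shape)  # mild grain
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
```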
Thanks for sharing! I'll test it out tomorrow. It definitely looks good!
https://preview.redd.it/dqlmval6q76g1.png?width=2495&format=png&auto=webp&s=0ac57a0f00e9de6f30566c1d3fa345ef5d8c8d6e
Same for me, seedvr2 is really damn good
Great feedback, nice to hear that! Indeed, it is really good. It doesn't seem to read things into the image that aren't there. It's my new favorite tool.
*Cries in Rtx 3080 10gb*
Oh no, my condolences...
What resolution are you upscaling to? Sometimes my needs go beyond 10k resolution... would be great if this can hit those numbers. Currently I'm running stuff through Magnific, since getting images to 10k and looking non-AI is a slow challenge on a 3090 GPU.
I just tested it up to 6000x6000 px, not further. But if you test it, feel free to let me know whether it works well or not at all at such high resolutions. And would you mind sharing a 10k image? I'd like to see it.
So far it's just abstract art stuff I think the missus might like printed around the house, but I'm planning for the eventuality she asks for something in A0 size, so I need to have the master file as big as possible.
I haven't got any images of real people with photo-real skin pores at 10k.
I do create high-res automotive VFX work for my job, so I will be looking more into the photo-real side of things gradually, as it seems not using some form of AI will see VFX artists get sidelined for people who have flexible workflows.
I totally get that feeling; when an upscaler works so well, it’s like you’ve stumbled upon a secret cheat code for creativity.
Haha indeed. It's always fun to see how the compare slider transforms the image into something super beautiful :)
Does SeedVR2 use image's prompt for upscaling?
nope, just run :)
Interesting, I thought upscalers which could use prompt would have an advantage over other upscalers. Apparently that's no longer true.
Does SeedVR2 use some new "technology"?
It seems to be working very well! Idk what they're using for it. They have some YouTube videos with a lot of info about it.
Ok, ty.
you're welcome!
Has anyone managed to run it on a Mac?
I didn't find it "leaving the image as it is" to be true at all. It changed the facial appearance of people in my images.
That isn't normal; it shouldn't change anything. I've rendered a lot of images with SeedVR2, and not one image has changed through this process.
I grabbed your workflow and gave it a spin - it seems to work really well from my initial test, thanks for sharing that. Nice that it also auto-downloads the model.
Do we actually need the CPU offloading? I have a RTX3090.
You're welcome! Thanks for the feedback! Just test whether it goes OOM if you disable it. You could also test disabling the tiling. But it also depends on your target resolution.
Which one are you using? https://huggingface.co/SassyDiffusion/SeedVR2-7B_BF16/tree/main
https://huggingface.co/numz/SeedVR2_comfyUI/blob/main/seedvr2_ema_7b_fp16.safetensors
I’ll try it. I’ve been using Wan as my go to upscaler after being greatly disappointed by Ultimate SD.
yep, that disappointed me somewhat as well. Either I'm too stupid for it, or it's just not good.
Thoughts on standard vs “sharp” version of the model?
Yep, I tested it. And the sharp is a bit too strong, at least for realistic skin. The FP16 non-sharp version is exactly the sweet spot for it. I don't know if the sharp one might be better suited for anime or other things.
I was just looking for an upscaler, thanks!
you're welcome!
Who is making an actual API for all this ComfyUI shit?
Interesting that everyone talks about the VRAM and nobody mentions the RAM needed for VR2 to work.
To be honest, I hadn't given it any thought, since it worked without any problems for me. What do you know about it?
SeedVR2 is amazing. I just wish it was better with NSFW body parts. It definitely has issues with female genitalia and often changes them with weird textures.
Good to know. It seems to work very well with SFW. Out of over 2000 photos this week, not a single one had artifacts or other errors due to SeedVR.
There are 3 different FP16 models for SeedVR2. Which one are you using exactly here?
seedvr2_ema_7b_fp16.safetensors
https://huggingface.co/numz/SeedVR2_comfyUI/blob/main/seedvr2_ema_7b_fp16.safetensors
put the tech to good use and upscale that hamster in its glass cage that is scared to death and makes a funny plushy-like face
https://preview.redd.it/ngkrxo7da76g1.png?width=1512&format=png&auto=webp&s=fe6a2aeef4ff4d9e04bcb051ea3999a4cc475d4a
Ask and you shall receive
https://i.kym-cdn.com/entries/icons/original/000/027/916/hamster.jpg hm something is diff
hehe
Man, this is amazing, thank you!
But I have a doubt: I have Triton installed, but shouldn't I see Triton under the attention_mode option?
https://preview.redd.it/qmtu4hnk876g1.png?width=483&format=png&auto=webp&s=53a18eda7a4e202c15a901f01ff71e16903cee79
thank you bro! I really appreciate it! I think not. I definitely have it installed and just these two options
https://preview.redd.it/vkzn4h0ra76g1.png?width=811&format=png&auto=webp&s=3c0bbebee3b4091e3d5eefb7a0ecb81ad8dbcef1
https://preview.redd.it/t8n0e529b76g1.png?width=498&format=png&auto=webp&s=7ec1181fb6fc2d99c5259058b4d4c51d3c6c8a9a
this frog looks cool :)
It's funny... I, like many others, gave up on SeedVR2 trying to use it for videos, just because it's so VRAM hungry. Never thought about using it for upscaling images, though. Reminds me of people using Wan 2.2 text2img, which kinda messed with my brain for a bit lol.
hehe, indeed. It is next level for images if you are using the right model. I actually read somewhere that wan was originally intended to be an image model, or at least designed to be one. I could be wrong, though.
Tried that, and it f#cking works! Fantastic! Thank you very much!
haha, thanks for your feedback! ur welcome
https://preview.redd.it/v61g34ufp76g1.png?width=1088&format=png&auto=webp&s=cc73032b115bb78eb2f95162c71727cb499c159d
looks great! Did you use the 7b fp16 model?
yes
https://preview.redd.it/hs9tvwd2pb6g1.png?width=2524&format=png&auto=webp&s=0e6828fc2779d7f305c84e0c976040c1452704d1
sooooo clean my friend! Incredible result!
This is to make a good image better, right? I'm trying it on not-so-good smaller images, or crops from higher-res images, to start, and maybe I'm missing some parameter or something; the final result has a lot of pixel noise and oversharpening.
I think it depends on the input resolution. So far, I've only upscaled images with a resolution of 1736x1736 pixels. The quality should be significantly improved, without noise or anything like that. Of course, it also depends on which model you're using.
I’m using flashVSR, I vibe coded with Codex to make it work on my rtx 2060 6gb vram.
This is the repo I modified
https://github.com/lihaoyun6/ComfyUI-FlashVSR_Ultra_Fast
I’ll probably create a fork if someone is interested
Lol I was waiting for jank teeth or something comical🤣
😂
I'm sorry, but what do I do with this workflow? Where do I put it?
just download it and move it into comfyui
Seedvr2 just doesn't work on my 12gb vram
Is there any way of getting it to just upscale the original image by X (2x, 3x)? I was trying to use the multiple-images section, but as my images are not a consistent aspect ratio, setting the resolution did not work well. Ultimate SD Upscaler has an 'upscale by' setting on its node which does the trick.
Maybe resize before upscaling it with SeedVR? Otherwise, I can't think of anything else, as I've only used it with fixed resolutions so far.
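If you want Ultimate-style "upscale by" behavior per image, you could compute the target size from each input's own dimensions before the SeedVR2 node. A hypothetical helper (the divisible-by-16 snapping is my assumption; check what your resize node expects):

```python
def upscale_by(width: int, height: int, factor: float = 2.0,
               multiple: int = 16) -> tuple:
    """Scale a resolution by `factor`, preserving aspect ratio and
    snapping both sides to a node-friendly multiple."""
    tw = int(round(width * factor / multiple)) * multiple
    th = int(round(height * factor / multiple)) * multiple
    return tw, th
```

That way mixed aspect ratios each get their own target resolution instead of one fixed setting.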
Can it upscale NSFW stuff?
Idk just try it :)
Has anyone tried it for artwork / stylized images? How did it perform?
That would be interesting, since several people have already asked.
Sadly, none of them are really good. I have tried over 50 upscalers on some old pictures of myself, and they all ended up looking nothing like me. They are great for generic photos of people you do not know, or for scenery, or objects, but when it comes to personal photos, do not expect the best quality; you will probably end up disappointed.
I don't get it; it's just a face shot. Even SDXL can do amazing face-shot upscales. The real magic is in full-body shots; show me the same results with a full-body shot and I will be impressed.
ok I will check it later
Just so you know, you said you had a 5090, and that workflow you shared is set to use the CPU.
No, that is just the offload device for when your VRAM is full.
You should also see the CUDA device in the other options.
Holy shit! OP gave the workflow!!
haha :)
Anyone have good learning resources for, like, what I'm supposed to do with the pastebin code or like how to use any of the stuff OP linked?
Start using GPT. I learned most of it with GPT, Reddit, YouTube, and by doing stuff and failing in Comfy. You can grab a GPT Plus account for $1 on z2u c o m.
I'm using Comfy and getting decent images, but I only understand basic workflows. I've seen people using all kinds of extra nodes, and while I did manage to install the node manager, I can't seem to download or even find many of the ones I see. I also need to learn things like inpainting and how to do a good image+text-to-image prompt, etc. I've been using Copilot, which is ChatGPT, but it's not great. It made prompts for Z Turbo that were okay but needed a lot of tweaking, and when I asked it about KSampler settings, it was pretty wrong.
It's definitely true that you can't always rely on GPT or anything like that. There's a lot of misinformation, but also a lot of useful information. Especially when it comes to fixing the setup, installing custom nodes, and so on, it's very precise. If you ask it what a KSampler is, what it does, and what happens when you change the settings, it's quite accurate. It's more of an issue with exact settings or building workflows, though; I've often had less luck with that. You should really start with simple basic workflows (also in Comfy's templates) and gradually try to implement more and more features. Download not-too-complex workflows and just experiment: change settings and keep observing how the image changes.
I've generated tens of thousands of images. Often for days on end, just sitting there, changing the cfg value by 0.1, clicking run, and doing that with all sorts of settings, then comparing the images to see what's changed. That's the process, if you want to fully understand it.
A very good source of videos is Pixorama on YouTube. It is straightforward, very structured, step by step. Just use his workflows and start watching his videos. Hope that helps! :)
I was whole-heartedly ready to come in here and tell you to fuck right back off but god damn. That actually is super impressive
hehe, glad to hear that! Thank you!
Dude, I just tried it in my workflow; it's great. It seems to be working a lot better these days than previously.
Any reason why you use the video node over the image node?
Thank you for your feedback! I don't get your question; what exactly do you mean by that? :)
Thanks for sharing!
Know any but for videos?
seedvr is for videos, but topaz should be far better for that
The generated images are all great, except the skin looks a bit strange
https://preview.redd.it/n3o8dtm27c6g1.png?width=1528&format=png&auto=webp&s=66985e36deccc38a2d754789a4c42889445de7c0
Looks super crisp. I get this skin if I'm using any upscale model like the 1xSkinDetail before the SeedVR upscaling. I think something before the actual upscaling process triggers this.
Is there a gradio version?
Idk. I'm already working on making my own Gradio one with the CLI version.
It sometimes hallucinates stuff it wasn't trained on, but it's pretty good for images. I was surprised, though, how bad it looked for videos, considering it was designed for them. Temporal stability is not great, not to mention how long it takes.
While closed source and an absolute shitshow of a software on various levels, Topaz VAI is sadly still unbeatable in video upscaling.
I'm actually quite curious to test different video upscalers. I'm not very familiar with them yet. But it's interesting that it works so well for images but messes up videos. As you said, it's primarily a video upscaler. Can Topaz be roughly compared to what I showed above with the slider? I mean in videos.