https://i.redd.it/nbpwvp7zv56g1.gif
Bam jongen!
Bad trigger discipline
how did you get that handheld motion pls?
Hold on, this is crazy....
She smokes that cigarette as Thomas the tank Engine, very sexy!
I think she might actually be the diesel.
Amazing, can u share the wan workflow?
Hello, sorry for the late response. I got the WF from YouTube - here's the link
Z-Image: https://huggingface.co/vantagewithai/Z-Image-Turbo-GGUF/blob/main/example_workflow/Vantage-Z-Image-Turbo.json
WAN: https://civitai.com/models/1852904/wan-22-workflow-optimized-for-rtx-3060-12-gb-vram-gpu
Ty. Can u name the YT video for the wf?
We have the same avatar twin
Can anyone share this on another site for those of us in the UK where Civ is blocked :(
Just get VPN from browser.
uk blocked civitai ? man uk is becoming like north korea lol
UK did not block civitai, civitai blocked UK
To be fair this wasn't a spiteful decision.
This is due to the UK’s Online Safety Act (OSA), which imposes strict legal requirements on all platforms with user-generated content. These include biometric age checks, complex legal risk assessments, and personal liability for staff. These rules apply even to platforms based outside the UK.
So rather than comply with the UK's draconian policies, they just noped out.
Yes. Blocking the UK and EU is a common option if the site doesn't make much profit from users there. Too strict and too risky.
The UK thing is a bit different as it's due to their age verification requirements.
Very few websites block EU users, aside from those providing services in a limited number of countries (mostly just US ones). Home Depot is one example, which has pretty much nothing to gain from EU users.
loads Austin Powers sdxl lora from og civit oh behaaaave baby yeaaa
Wankers
Only with the correct license!
we don't need no stinking badg...oh wait no yea good call
ProtonVPN is free
Why is civ blocked in uk?
Given the state of the world currently (governments and corporations constantly spying on us), it makes no sense to use the internet without a VPN.
A basic VPN is very cheap. I use Private Internet Access, a basic consumer VPN that only costs £1.67/mo. Alternatively you can temporarily use a free VPN or Tor. I wouldn't recommend using a free VPN long term though.
do you have the youtube link plz?
How does this work? Run the wan workflow or the z image one or both in an order?
Thanks
Saved <3
Very nice.
Epic thank you
Girl breathing fire like a dragon, jesus what are these cigs made of?
Like less than half a second puff for that much smoke😵 it looks more like vapour from a vape.
That this is the problem we’re noticing is amazing, btw.
She would have had three arms and four hands a couple years ago.
Yes, its crazy to think about it
Real men skip the tobacco and smoke tar directly
Lol yeah for a split second drag there was enough smoke to fill a car.
Hotbox every day
Looks like vape smoke.
30 seconds is nice for that card. What workflow are ya using?
How much faster would it be on a 3090ti?
Under 10 seconds, I have one as well
Wait, it’s under 10 seconds for the whole video??
At least half, if not a quarter, of the time.
GJ confusing everyone here OP lol...
You did NOT generate a video on a 3060 in half minute
30 sec for image. Video not mentioned
But what would be remarkable about generating an image in 30 seconds?
That's an easy question to answer.
That is what's remarkable! The White House uses this very tried and true technique.
Yup 2020-2024 it seemed to work well for them.
The post is a literal video
Yep, clickbait post. It's weird that people upvote this.
But Z-Image doesn't make video. He says Z-Image, 30 sec.
Correct
30 sec per frame?
For the init to generate in z-image
I am missing something. Why is it interesting to generate an image in 30 seconds? That seems slow.
Idk I'm just answering the obvious. Idk that it's interesting but on a 3060 I'm guessing that is noticeably faster than Flux/Chroma/Wan t2i
It's fast for a 3060 on a modern high quality model
Ha ha you were wrong and now you're trying to play it off 🤣
Correct
With words.
do you think that a 5070 TI would be able to? getting one soon for gaming and curious about how good it’d generate videos
It will be pretty fast but not under 30s for ZIT image + Wan video at a decent resolution/length, not even a 5090 can do that
Well, from experience it would take about 110-150 seconds at 416x752 and 24 frames for a 6-second video.
smoking is bad
Snow White trash
Trailer White
Triggered
In today's world, where everyone has access to full information about all the negative effects of smoking, it is not just bad, but one of the most idiotic things a non-suicidal living being can do. :)
And it smells terrible, and yes, everyone knows you smoke if you smoke. You cannot hide it.
While that is true of "living beings", non-living beings are even less suicidal.
But it looks so damn cool.
but it's bad for health
Name one bad thing that has happened from smoking... I'll wait.
Lung cancer? Reduced lung capacity
Not true. Doctors and nurses receive a $5 bonus for every lung cancer diagnosis they give out. And they add a medium pizza if it's considered a cause of death. There's never been any concrete evidence that it even exists.
Both things that probably happen anyways, smokers just die b4 this planet goes into the shitter
If you frame dying as a good outcome, every action that kills you will be framed as good. If you think of this person as someone with a loving family and friends, happy with their life, it does not strike me as a good outcome.
Ik, just being edgy. But tbf, in my opinion everyone should have the right to indulge in self-destructive behavior as long as it's fun. My body, my choice.
Problem is, second-hand smoke is awful for anyone around you. And even 3rd hand smoke has been shown to be pretty bad (grandma smoke, mom spends the day at grandmas, comes home with smoke on her shirt and holds her baby).
Smoking isn’t just self-destructive unless you take extreme caution to keep it that way, which I’m going to guess literally nobody ever has done.
30 seconds for video I’m impressed
I think he means just 30 seconds for generating 1 image on Z. It could take him at least 5 minutes for the video.
I know because I have a 3060 as well.
Yeah, no way they meant the video. For 30 seconds of video on my 5070 Ti you'd be looking at like 10 mins?
40-50 seconds per image on my 3060 12gb. 1440x1440 resolution.
Wouldn't 5 minutes be 10 frames based on this calculation?
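That back-of-envelope check, assuming a flat 30 seconds per frame (the figure reported in this thread, not a measured benchmark):

```python
# Back-of-envelope: at ~30 s per Z-Image frame on a 3060,
# how many frames fit in a 5-minute budget?
SECONDS_PER_FRAME = 30        # per-image time reported above
BUDGET_SECONDS = 5 * 60       # five minutes

frames = BUDGET_SECONDS // SECONDS_PER_FRAME
print(frames)  # -> 10
```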
Read carefully. 30 seconds for z image.
Did you just ask a Redditor to actually read a whole post? Lol
Tldr
Literally the post title. I guess some people have TikTok brain and can't even focus for one second.
But how long for z video?
No Z video. OP used Wan to generate the video. OP does not mention anything about video generation time.
(it was a joke. Say your comment then mine out loud.)
I know, just jesting. I have a 4090; video takes too long.
I don't think it's possible to do a 30 sec video with that quality on a 3060.
It's also not possible to make videos with z-image. That part is obviously done using a different model.
the caption literally says WAN for video
Shh, let them figure it out on their own!
I don't think it's possible to make videos with Z-image either xD
lol 🤣 yeah that one too... Maybe he meant 30sec image using Z-image
Do you think it'd be doable with a 5070 Ti? Getting one for gaming and wondering how good it'd be with AI.
can't wait for the wf, im on 3060 too
Image for 30 seconds, video minimum of 30 mins I guess.
5-6 minutes for video
How? Wan2GP?
I'm on a 3060 so I use the lightning LoRA and 6 steps. 32 GB RAM as well. @ 480x480 and 640x480, Wan 2.2.
Cool. The video from OP looks high res and it’s at 12 secs so I was thinking maybe it’s longer? But thanks for sharing your settings!
OP said they are using a workflow from YouTube. I would imagine it is an i2v workflow that generates multiple short clips and then stitches them together into one video.
Not sure what this workflow does but you could then use something like FlashVSR to upscale to HD at the end.
Could even have it all setup in one shot. Prompt the Z image turbo gen, that gets passed to wan, that video gets the last frame used for the next gen, does that a few times, splices video together, upscale. Then boom there is your 12 second video. If it was me... Three 4 second videos would do it. Get it done in about 15 minutes with sage and triton. Maybe less.
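The chained setup described above, sketched as plain Python with stub functions. None of these names are real ComfyUI APIs; each stub stands in for a node group in the graph, and it only illustrates the data flow (init image → i2v segment → last frame seeds the next segment → splice → upscale):

```python
from dataclasses import dataclass

@dataclass
class Clip:
    frames: list      # frame images (stubbed here as strings)
    seconds: float    # segment duration

def z_image_turbo(prompt):
    """Stub for the Z-Image Turbo text-to-image step."""
    return f"init-frame<{prompt}>"

def wan_i2v(start_frame, seconds=4.0):
    """Stub for one ~4-second Wan image-to-video pass."""
    return Clip(frames=[start_frame, f"last-of<{start_frame}>"], seconds=seconds)

def upscale(clip):
    """Stub for a final upscale pass (e.g. FlashVSR)."""
    return clip

def chained_i2v(prompt, segments=3):
    """Generate an init image, then chain i2v passes off each last frame."""
    frame = z_image_turbo(prompt)
    clips = []
    for _ in range(segments):
        clip = wan_i2v(frame)
        clips.append(clip)
        frame = clip.frames[-1]   # last frame seeds the next segment
    # Splice the segments into one video, then upscale.
    spliced = Clip(frames=[f for c in clips for f in c.frames],
                   seconds=sum(c.seconds for c in clips))
    return upscale(spliced)

video = chained_i2v("girl smoking a cigarette", segments=3)
print(video.seconds)  # -> 12.0 (three 4-second segments)
```

Three 4-second segments gives the 12-second result described; in an actual ComfyUI graph each of these functions would be a sampler/decoder subgraph wired together.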
Use the workflow OP posted. 10 minutes with a 3060 and 32gb ram for three second videos 60fps
You can do Wan on 3060?
I’m new to all this, but is there a way for a noob to use z-image on an AMD system? I recently got a strix halo system and I’d love to have a play but it seems like a minefield
Yes, but setup was confusing for me as a noob as there wasn't a perfect guide. I have a Ryzen 1800x, Radeon 6900xt, 16gb ram. I had to install Linux because windows support for ROCM is bad on an older card like this, according to the guide I found. I can generate images in 22 seconds with the default setup, but offload the vae decode to my CPU. Overall time is about 50 seconds per image. When I don't offload to my CPU it errors out because of memory issues randomly, but the total time goes down to about 30-35 seconds.
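For anyone attempting the same, the Linux/ROCm route described above roughly amounts to the following sketch. The ROCm version in the index URL and the gfx override are assumptions to verify against the PyTorch install matrix for your specific card:

```shell
# Install a ROCm build of PyTorch (Linux only; rocm6.1 is just an
# example -- pick the version matching your installed ROCm stack).
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.1

# Some RDNA2 cards need an HSA override so ROCm treats them as a
# supported gfx target (common workaround; verify for your GPU).
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Launch ComfyUI with VAE decode offloaded to the CPU, which is the
# offload the comment above uses to avoid random out-of-memory errors.
python main.py --cpu-vae
```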
Unlikely. AMD is not geared to AI at all. You will need Nvidia, a 3060 with 12 GB minimum today.
No, it works. Flux also works on AMD.
They might work, but nowhere near as well as Nvidia cards.
I've got a laptop 3060 with 6GB vram and can run z-image with decent gen times. (Decent for being low end). Probably the best quality I can get locally.
For which gpu? The default workflow works. Where are you facing an issue?
Workflow or it didn’t happen.
If I had a Dollar for every workflow that was shared, I'd have 2 dollars.
which isn't much, but it's impressive it happened twice.
I'm using a 3060 too but can't run Wan 2.2. Are you using Wan 2.1? I never get good output from it.
Just use the default template and replace the 14B diffusion models with gguf Q4. You need to use the UNet Loader node.
Use fp8 high and low models
Yeah I had to check comments lol. My 5090 generations are ~90-100 seconds for 5 second video.. I saw 30 seconds and was stunned
I can imagine the image was generated that fast lol. Video? Idk about that.
could you share which workflow you use ? My 5090 takes like 280 seconds for a 640x640 5 sec video.
https://drive.google.com/file/d/1OBJC6ONN-cYaPZy6i2C7Eu0IvFQf8jOS/view?usp=drive_link
this has audio integrated
no idea if its gonna save all my NSFW stuff but.. u can delete all that
you can disconnect the audio on the right if you want. and i have an image loader that loads images from a folder. you dont need that. you can do it with that initial image node.
Looks intimidating but, not a ton you have to do.
this is i2v
and like i said also has audio included
so yeah. i hope it works for you. my videos are 800x600 and take just around 100 seconds right now.
Edit: Yeah idk if it does but that might come with an NSFW image. be warned.
Thanks for sharing, I will check it out when I get home. I also appreciate the nsfw warning :)
I didn’t know people were getting 5 second generations that quickly, crazy
Hello, sorry I just asked for access without properly asking here first, hope you don't mind. Thanks for sharing.
its funny. I actually had the wrong link posted. I posted some of my school work lol wtf.
But I updated the link. it should be good now.
Oh god, glad it was protected heh, thank you very much, I'm testing the wf right now!
I hope it works well for you!
Is this wan 2.2?
How much RAM, mate?
Wow I really need to switch to comfyui huh
Is that WAN on the 3060 as well? Is the 30 second gen just for the image or for the video?
??? How are you getting 30 sec gens on Z Image? I have a 3060 ti and the full fp16 model takes 300+ seconds for 9 steps. I tried the Q4 GGUF and it still took 160 seconds, 9 steps.
30 sec for what? I have 3060 too - nothing close even for a single image :)
I can do about 30 seconds per 1024px image on a 3060 12Gb. Latest Comfy and Triton installed.
What's Triton?
Triton is OpenAI's GPU kernel compiler (available on Windows via the community "Triton for Windows" build); ComfyUI uses it to compile optimized GPU kernels for things like torch.compile and SageAttention.
I suppose they could have used Z-Image for the individual image generations, batch processed with some means of character consistency, and then stitched the results together.
lol. copium.
Makes no difference to me. It's just one idea/theory for turning a sequence of stills into animation. We're all so used to seeing Wan workflows that we've just come to assume every video is now using that.
Yeah during the early days used to use the AnimateDiff extension. But you can see the slight flicker and inconsistency in it.
Lol early days. I was using Dynamic Prompts like almost a full year before V1 AnimateDiffusion even released. AnimateDiffusion was like a godsend.
Oh lol. How far we have come is incredible to see.
https://preview.redd.it/rvtqh2or076g1.png?width=1008&format=png&auto=webp&s=4951b9ea2837d20ea98d53bdd09db42aec5eb617
Well. Mystery solved lol. So did they just generate one z-image and use Wan2.2 for the remainder?
they posted the workflows
One surefire tell of ai is how every model makes solid clumps of smoke for everything, even the steam from a tea cup.
Workflow?
Workflow or cap.
Your title is bullshit -- crediting Z-Image for a video and claiming it took 30s? FFS.
Can you share the prompts? Good job
I thought I was easily impressed.
Anyone tried Hunyuan Video 1.5 with Z-Image yet
Too much spaghetti, but it may work.
How? My 5070 crashes always...
I'm not?
Is Z-Image uncensored locally?
Yes
vapor cigarette nice XD
Inhaled smoke moves and looks different from smoke coming directly from the cigarette. Amazing.
A 3060 takes 35 seconds with Z-Image just for the 800x1200 image. Is that what u mean?
i really wish people would make something other than "pretty girl"
cool showcase tho
So wait only the first image was Z and the rest was Wan?
can i make hentai with z image
Just asking this in the comments of a random post is crazy thirst lmao
But yes, Z-Image has no filters. You can generate hentai images.
How many generations did you stitch together?
Holy smokes, that is so good.
full setup? whats your cpu and ram?
Wait z-image can generate videos? 😨😨
This world is gonna go to shit in a year
Would you mind sharing your work flow?
I am still learning ComfyUI and literally have the same GPU as you, so seeing this makes me extremely curious on what I can create off of my setup. I would seriously appreciate it.
This looks really cool
locally? 3060 fr fr
Just tested your exact settings on my 3060 12 GB (driver 566.03 + torch 2.5.0 cuda 12.1) and I’m getting the same 28-32 sec per 512×768 frame with zero VRAM overflow.
The key was dropping the cache to CPU at frame 12 like you did + using the --medvram-sdxl flag combined with the new tiled VAE decode.
For anyone still hitting OOM: swap to xformers 0.0.28 instead of the built-in torch SDP; drops another 1.8 GB and keeps the same quality.
30 sec per frame on a 3060 is actually insane for full Z-Image flux pipeline right now. Huge props for sharing the exact command line.
how long does it takes to generate the video output using 3060?
Hi complete outsider here.
What specs were needed to generate something like this before?
Did the spec requirements go down without the quality dropping?
really good and stable result
Yeah, Z-Image is impressive. Can Wan 2.1 or 2.2 work with 8 GB VRAM? Can't find any good workflow. Need help, thnx.
Can you share workflow?
Same question Haven't check the workflow tho
Workflow?
Amazing
What prompt did you use for wan?
You did this on a 3060??
Taps a cigarette to her mouth and vape smoke pours out.
10/10 for realism!
Wait can you do 2d animation with this ?
Smoking is gross and unhealthy
30 seconds on 3060!?!?! Thats amazing!
Hey all, anyone running this on MacBooks? Any luck?
What a beautiful Kazakh girl came out :) nice girl, nice work)
This sub is so fucking trash.
Im really not excited to make lots of stuff only to remake more in two weeks when the next big thing comes out