AMD's SVP downplayed Intel's new iGPU while AMD didn't deliver anything new at all. Strix Halo is in another class, but it's an old architecture, same as Strix Point, which is in the same league and is losing. AMD never really won the mobile market, and with this kind of attitude I don't think it will catch up. What do you think?

  • I'll believe them (either Intel or AMD) when I see the benchmarks. Until then this is all just pointless noise.

    And the price. I used to be an AMD fan, but as soon as competition with Intel was gone, AMD raised their prices and now act as if they believe they are a luxury brand. I hope Intel gets back into the game; if it can slash prices, it might end up being the right choice.

    At the same price, Intel is dead on arrival. At a serious discount they will take the place of AMD. No one is buying Strix Halo for handhelds, it is too expensive.

    I used to be an AMD fan, but as soon as competition with Intel was gone, AMD raised their prices and now act as if they believe they are a luxury brand.

    That's why it's silly to be a "fan" or "supporter" of one company or the other. They don't care about you, they care about making money and when they have a dominant position they will exploit it.

    At the same price, Intel is dead on arrival. At a serious discount they will take the place of AMD. No one is buying Strix Halo for handhelds, it is too expensive.

    Outside of that one device (Ayaneo, maybe?), you're right. But now AMD is also releasing an 8-core version of Strix Halo with the full 40 GPU CUs, which should be cheaper. I expect we'll see that in more handhelds at the high end. Realistically speaking, it's easy to make the case that on a 7"-9" screen, 40 CUs are way overkill. There's still room for a middle ground that Intel could easily fill.

    They need to release a Strix Halo with less RAM. 24GB is good enough for portable devices. I don't want to pay for useless RAM because of bullshit AI or bullshit comparisons with the Apple M5.

    I posted elsewhere about the Framework Desktop with the Ryzen AI 385 and 32GB of RAM. That's a pretty sensible config for a small gaming device, though it has 32 CUs instead of the full 40. Still, that puts it ahead of anything in its class other than the 395+.

    Interesting. What is the price of that compared to Z1E?

    It's so weird that people treat their computer parts with a cultish following. Most of the people I know don't think about their cards at all and are just happy to play whatever games.

    Super weird to be "team red" or "team green".

    I can't imagine describing myself as a "my computer chip manufacturer fan". Cringe lmao.

    Absolutely, consumers win when competition is hot. AMD has a bit too much of a lead ATM, so they are cashing in and getting lazy. That said, I am glad they are having their day, only because a few years ago they were on the brink of bankruptcy, and I really want to see them on a fairly level playing field with Intel... If we end up with 2 juggernauts trading blows, having big research budgets, etc., we'll get lots of innovation and competitive pricing.

    What? No, AMD deserves much worse. How can a company fuck up so much and still survive? I would rather have good competition than competition for the sake of competing. Marketing is bad, products are bad, and they keep shooting themselves in the foot. I'd rather Qualcomm or some other ARM company compete with x86. It's time for ARM or RISC-V to shine.

    You had to shell out $1k for an 8-core CPU for about a decade before AMD came back.

    So "it just hiked the price" is BS.

    AMD cannot keep prices low while TSMC, an effective monopolist, keeps posting record profits quarter after quarter.

    Bullshit. Z1 Extreme devices were, altogether, sold for less than $1,000.

    Ever had a look at their profit margins?

    Yes, have you compared it to that of the competitors?

    In general, pricing is not the issue to me.

    Dirty play is: blackmailing OEMs, proprietary standards, and other misuse of a dominant market position (on top of being illegal).

    Yes, I did. AMD could hold prices low; they choose not to.

    Why would AMD "hold prices low"?

    Gross margins are below 50% (48% as of 2022), while NV is at 70%.

    We know margins are worse in the PC/GPU market and better in datacenter.

    Yes you are confirming that AMD is as scummy as Nvidia. Not surprised when I see the weasel new leaders they have.

    Nice try, AMD employee #1234. Check the price hikes for devices with the Z1E vs. the HX 370 vs. the AI Max+ 395. Steep short-term price increases, and AMD is increasing their profit margin.

    Let me guess: you're too young to realize he's talking about a time before yours.

    AMD is basically just waiting for their chance to do the bad things. They are a corporation after all.

    Bullshit.

    For starters, pricing is not a "bad thing".

    Bad thing is, pick any piece from blue/filthy green's arsenals:

    1) Strongarming OEMs 2) Strongarming Journalists 3) Proprietary standards

    Yeah, while AMD can't even announce their product AT A CONSUMER ELECTRONICS SHOW and instead ONLY TALKS about AI and government work...

    AMD sucks just as bad as Nvidia, just as bad as Intel; it's just a constantly moving circle jerk as to who is the least evil.

    9850X3D lol gottem

    It is getting worse. It was better when Herkelman was there. The new guy, Huynh, is a joke.

    You didn't just try to suggest that CES is for... consumers... did you? Seriously?

    What do you think the acronym CES stands for?

    Yes, the acronym has "consumer" in the name, but it's not directed at or intended for consumers. It's intended for the big industry players: the manufacturers, the ones creating services, and parts of the distribution chain. It was and has NEVER been intended for end users, the broad consumer base.

    Maybe bloody well look up what CES is and what it's for before asking a silly question.

    How about this one: strongarming game developers into NOT including DLSS?

    Ahaha, lovely lie. And even if true, how would that change a list of "bad things", lol.

    It would further validate what HisDivineOrder said which is that AMD is just another corporation.

    Which they are.

    No, it would not. There is a difference between a shoplifter and a serial killer, even though both are criminals.

    Filthy Green plays in a league of pieces of shit of its own.

    This is so delusional

    You're talking about the AMD that made the INT8 version of FSR4, which is THE hardest part, and then keeps it away from users to sell more RDNA4 cards.

    AMD had no reason for such a lock-in; it makes sense only for companies dominating the market, to push people to refresh.

    I have not seen credible proof that FSR4 could be "easily backported" but isn't.

    Waiting? They’ve been busy doing that for years now

    Help us Cyrix, you are our only hope...

    I WISH they were still around. I think their IP got sold to Via, who's not doing anything with it.

    At this point, we might only get competition in the desktop x86 space if the government forces Intel and AMD to license x86 and x86-64 to some other chip designer (like Qualcomm or Mediatek) or Windows on Arm and Linux on Arm start getting wide application support, including office software and games.

    Nobody is interested in x86, otherwise Via would have been bought up.

    We are in the age of the cloud, and all software is custom-made. Hence RISC-V.

    Seems like things are going that way. I guess all we can do is wait and see if the Arm takeover gets so complete that Intel and AMD have to join in, and then suddenly have to compete with Qualcomm and Mediatek.

    If that ever happens, hopefully we'll see more competition.

    Lol. Blast from the past. It only took AMD 2 generations of Zen to get the lead. Intel can get it done if they get rid of all the useless VPs and get back the good engineers.

    AMD has always been like this. The OG Athlon FX chips from ~22 years ago were $1000 CPUs.

    "Being good" was never about price.

    Nah, Z1E prices were reasonable recently.

    Plenty of folks tested it at CES. Intel was confident enough to let reporters run benchmarks and it's basically around 4050 level. You should be able to run most games at 1080p at medium-high settings in an Ultrabook form factor.

    Wasn't it more or less equivalent to the 4050M, since it was power-limited to 30 watts?

    They said it rivals a 4050 at 60W. The 4050 maxes out at 100W on paper, but it actually hits peak performance at 80W. A 60W 4050 is about 85% of its max performance.

    So performance-wise, Panther Lake should be about on par with a full-powered 3050 Ti laptop.

    That, plus more advanced ray-tracing cores; it's running Doom: The Dark Ages really well. AMD is still stuck on RDNA 3.5, and ray-traced games suck on the 890M.
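
    As a sanity check, here's the scaling arithmetic from the comments above spelled out (a minimal sketch; the 85%-at-60W figure and the 4050/3050 Ti equivalence are the commenters' claims, not measured data):

    ```python
    # Napkin math: if Panther Lake's iGPU matches a 60 W RTX 4050,
    # and a 60 W 4050 delivers ~85% of the card's peak (reached near 80 W),
    # then the iGPU lands at ~85% of a full-power 4050.
    peak_4050 = 1.00                  # normalized: 4050 at ~80 W
    igpu_estimate = 0.85 * peak_4050  # 60 W operating point, per the comment

    print(f"Panther Lake iGPU ~= {igpu_estimate:.0%} of a full-power 4050")
    # ~85% of a full 4050 is roughly where a full-power 3050 Ti laptop sits,
    # which is how the 3050 Ti comparison above follows.
    ```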

    I wish Intel released a 24-Xe-core variant with a 256-bit bus: double the cores and bandwidth. That would compete with the 5060/5070 laptop GPUs.

    Also, Intel has XeSS, which is very helpful for handhelds since they can't manage higher wattages, although not all games provide XeSS as an option.

    Linux and OptiScaler is the way

    Really wish OptiScaler had a better installer, something akin to ReShade's. The whole manual process for each game makes it annoying to use.

    True, but you set it up once per game and then forget about it.

    Missing the point. It's about accessibility and ease of use, not how often you need to do it. If a tool that brings functionality similar to Nvidia's isn't as accessible and easy to use as the manufacturer's apps, then it's relegated to enthusiasts only.

    ReShade is one of the most popular modding tools for post-processing shaders because it's so easy to install, use, and manage for multiple games on the same system.

    Intel still takes a heavy penalty on Linux in graphics vs. AMD. Hopefully that improves as well.

    Is there really a 4050, or are you referring to the 4050M even though you don't add the M?

    The 4050M, although it really wouldn't matter either way, as the 4060 and 4060M are functionally identical in terms of performance (+5-7% for desktop), so if there were a full-size 4050 we'd expect the same or an even smaller difference.

    They said 60W, but if you look at the laptop they used, it's 30W; the 60W is probably whole-system.

    Nvidia paid Intel $5B. Read whatever you can, but it was to stop Intel from giving a high-power GPU to the mainstream...

    It's the price of the final product that will matter.

    And given that Intel has the lion's share of the mobile market, I don't see why they would not ask outrageous $$$ for it.

    It is "impressive" only in the "for iGPU" context.

    Based on the benches shown, laptops were consuming around 60W, while AMD's HX 370 has been shown to be able to game at below 20W. So, uh.

    Let's wait for benchmarks in any case.

    Not synthetic benchmarks, we want to see benchmarks in games.

    You're either intentionally misstating this or truthfully aren't aware, BUT you can game sub-20W on any iGPU. What actually matters is the performance scaling.

    Also, just to clarify: while the 890M CAN game at 6-20W, its performance there is essentially identical to the 780M, Z1E, etc. It only gets impressive at power draws of 30W+ (signed, a very happy 7840U handheld owner).

    So, what we need to know is how well the new Panther Lake chips scale

    Perf + price + (to a lesser extent, but it still matters) power consumption together are what matter.

    None of the 3 is decisive on its own.

    That's all fair; I'm looking at it from a handheld perspective. Performance at power draws that are actually feasible in handhelds has been stagnant since handhelds really got popular. That is, it has if you want more than an hour of battery life.

    Sure, sure. I'm going to wait for proper benchmarks done under lab conditions and documented by more than "Intel let me run this game with the FPS counter on." I mean, the general impressions for Intel are quite positive and if they're true then I hope it spurs AMD to do more. I'm just not going to blindly accept "first impressions" as a replacement for proper testing.

    *With 64GB of RAM at 9600 MT/s

    Well, Lunar Lake is a monster and competes directly with the Z2E on both performance and efficiency, so there's no reason to think Panther Lake will be worse. Even if it falls short of Intel's claims, it will still be the leader until next year.

    It was a Z1E competitor, but AMD gravely disappointed with the Z2E: zero performance increase over the Z1E, and it came at a major price premium.

    Lunar Lake came out after Strix Point. It was squarely a competitor to the 890M. AMD just officially released the cut-down Strix Point as the Z2E later.

    I think you're getting downvoted because it was sold by reviewers as a Z1E competitor, as that's what was available in regular devices.

    The HX 370 was only used by niche manufacturers, like GPD, when it launched.

    You're right, though: it was supposed to be a competitor for the HX 370, but it was held back by drivers and other things until mid-to-late 2025, which corresponded to the Z2E (cut-down HX 370) release.

    Fast forward to now, and it competes with/beats both.

    Actually, the supposed driver issue was only an MSI Claw-specific issue, not a general Lunar Lake issue.

    https://www.notebookcheck.net/Intel-Lunar-Lake-iGPU-analysis-Arc-Graphics-140V-is-faster-and-more-efficient-than-Radeon-890M.894167.0.html

    Here's a review from September 2024 using an LNL Zenbook S14 with a 28W TDP. It had no issues generally outperforming the HX 370 in the Zenbook S16. As usual, the PCMR-esque crowd that dominates here paid no attention to laptops (which are the real-life volume) and only looked at some handheld (which is a niche IRL), so they thought that supposed "Lunar Lake issue" was widespread.

    Both are right/wrong. Intel made their comparisons to Strix Point because they're in the same power class. Panther Lake is much faster than Strix Point at the same power level, 45W (according to Intel; AMD doesn't deny that). AMD says it doesn't matter because their Strix Halo (up to 120W) is faster, which is pretty obvious.

    It's not technically lying; AMD is just referencing an entirely different class of product.

    The noise is doing a great job advertising for them. A war between them with fighting words will get them tons of free advertising.

  • Well, yeah, you can't compare them, because Strix Halo is on a significantly larger die, whereas Panther Lake is more comparable to something like the HX 370.

    If AMD is able to get Strix Halo to a competitive price, then sure, it will compete. But the issue is that with such a large die, I don't think it's possible for them to compete on price with Panther Lake.

    And a much higher power budget. AMD says they win because their 120W chip is faster than Intel's 45W chip.

    No surprise to anyone.

    Tbf, it's like the dGPU winner is claimed by whoever has the strongest card, so in that way it's kinda fair.

    But how's the availability? Is Halo actually in laptops? What's the pricing?

    And what's the bang per buck on Strix Point versus this?

    No, not really. Making up a "winner" is stupid and nothing but fanboy behavior. A faster dGPU is generally better because you're generally not power-limited on a desktop; it doesn't really matter whether you have a 5060 or a 5090 or whatever. In mobile systems it's a huge difference. There are entirely different power classes that don't compete with each other. A thin-and-light notebook designed for a 15W CPU cannot take a 100W CPU. Panther Lake aims at unplugged performance, which is the 45W power class, the same as Strix Point. Strix Halo is a much higher power class that requires the laptop to be plugged in permanently for full performance. It's for mobile desktops that are usually plugged in but can be mobile for some time with heavily degraded performance. It's an entirely different class of product, and you won't find (many) devices where Strix Halo and Strix Point/Panther Lake compete with each other.

    I generally agree, but I suspect that the 388H is using a much larger GPU than people suspect. I think it's probably ~165mm² in size, not 55mm². I suspect the 55mm² die variant is for the 4-Xe version, and the 12-Xe version is 3x that size.

    I also don't understand why Strix Halo is so expensive. It would be interesting to see BOM and packaging costs.

    12 Xe cores is 60% of the B580's 20 cores, and that die is 272mm². Of course, the B580 also has GDDR memory controllers and other blocks that aren't needed on a GPU chiplet, but it's likely that the die for the top SKU is quite a bit bigger than 55mm²; I'd say between 90 and 130mm². (Napkin math below.)
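
    Here's that estimate worked through (a minimal sketch; the 272mm²/20-core figures come from this comment, and the overhead range is a guess, not a measured deduction):

    ```python
    # Linear scaling from the B580 die, then shaving off dGPU-only blocks.
    b580_area_mm2 = 272                # B580: 20 Xe2 cores
    scaled = b580_area_mm2 * 12 / 20   # 12-core iGPU tile -> ~163 mm^2

    # GDDR PHYs/controllers, PCIe, etc. aren't needed on an iGPU chiplet;
    # assume they account for roughly 20-45% of the dGPU die (assumption).
    for overhead in (0.20, 0.45):
        print(f"{overhead:.0%} removed -> ~{scaled * (1 - overhead):.0f} mm^2")
    # Prints ~131 and ~90 mm^2, bracketing the 90-130 mm^2 guess above.
    ```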

    Yeah, I am going to contend that either that is wrong and it's the 4-Xe version, or Intel basically lied on their benchmarks.

    If neither of those two things is true, Intel's new graphics architecture will absolutely dominate the next round of discrete GPUs.

    With the B580, Intel needed ~80% more silicon to match Nvidia's performance. Now they need ~10-20% less die area, meaning their performance per transistor basically doubled gen-on-gen, which is unheard of. Even Maxwell (the largest architectural uplift among GPUs in the last 10 years) did not achieve anything close to that, and it was a massive overhaul with huge changes.

    So... either there is something big I am missing... or Intel is going to dominate in all things graphics going forward.

    I mean, you can legit put it over the Intel-provided slides and it's pretty much a dead-on match. The PCH is smaller in the presentation photos so they can make it look pretty, but the real chip is an exact match to that leak: https://cdn.videocardz.com/1/2025/05/INTEL-PANTHER-LAKE-DEMO-1200x675.jpg

    This generation is seeing quite a significant leap forward in manufacturing technology (gate-all-around/RibbonFET and backside power delivery/PowerVia); these usually do result in big gains, and that makes it a bit harder to compare to prior nodes. Not to mention the GPU is on TSMC N3E, still quite a bit denser than N4-class. It's also not the exact same uarch as the B580: while still a derivative of Battlemage, there do seem to be some (rather significant) improvements between Xe2 and Xe3: https://gamersnexus.net/gpus/intels-new-gpu-xe3-architecture-changes-handheld-gaming-cpus-xess3

    But a lot of the FPS gain will come from multi-frame and pixel generation; Intel wants to promote those numbers over native performance, and AMD can't do that with RDNA 3.5. I'd expect the 8060S to be a much more powerful iGPU, but it's lacking what is essentially lossy compression for real-time graphics. That's a pretty big deal, and I do think it's what will cause Strix Halo to age poorly. It costs too much for what it really is (a 308mm² I/O+iGPU chiplet can't be cheap on 4nm); it's really AMD's pipe cleaner for future packaging tech.

    People said the same when the Zen chiplets were rumored to be the size they are. You save a lot of area by not needing a memory controller on that chip, would be my guess. D2D bonding is very space-efficient by comparison.

    For some perspective: the Strix Halo iGPU block + media engine block is about 120mm² of the IOD on N4P (143.72 MTr/mm²); the rest is I/O and the NPU.

    The 12-Xe3 chip is 55mm² on N3E (216 MTr/mm²), so we could napkin-approximate about 80mm² if it were on N4P.
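
    For anyone who wants the conversion spelled out, here's the same napkin math as a quick sketch (the density figures are the published MTr/mm² numbers quoted above, applied naively; real blocks don't scale uniformly):

    ```python
    # Normalize the Xe3 tile's area to N4P density, for an apples-to-apples
    # comparison with Strix Halo's ~120 mm^2 iGPU+media block on N4P.
    n4p_density = 143.72   # MTr/mm^2, N4P (figure quoted above)
    n3e_density = 216.0    # MTr/mm^2, N3E (figure quoted above)
    xe3_tile_mm2 = 55.0    # 12-Xe3 GPU tile on N3E

    n4p_equiv = xe3_tile_mm2 * n3e_density / n4p_density
    print(f"~{n4p_equiv:.0f} mm^2 on N4P")   # ~83 mm^2, i.e. "about 80"
    ```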

    I think it's probably ~165mm² in size, not 55mm².

    Which might open a "dGPU-sized iGPUs" race.

    NV could be the main victim here. Surely AMD can oversize its iGPUs too.

    I actually thought that AMD was forced to do so by Filthy Green's GPP effectively banning AMD dGPUs. Typing this from a TUF G15 AMD Advantage Edition.

    What matters is that the Intel chip, regardless of actual die size, runs on quad-channel LPDDR memory instead of Strix Halo's octa-channel, and fits into mid-size and small laptops at 15-45W.

    Let's not perpetuate this "octa-channel" DDR5 nonsense; it's a quad-channel chip, and the Intel one is dual-channel.

    A technicality is a technicality: if the channel width is cut in half but the channel count is doubled, they still doubled the number of memory channels. People just need to learn that memory channels are not all equally wide; DIMMs are just all they know as PCMR enthusiasts.

    Each DIMM of DDR5 has 64 bits of total bus width, same as DDR4, 3, 2, and 1. And I do understand what you're talking about (not to mention that Strix Halo can't even take SODIMMs), but nobody else talks like that. When you call it an "octa-channel" chip, what people read is that it has as much bandwidth as a Threadripper Pro, because that is how AMD markets those chips themselves.

    Well, there's more than one type of memory 🤷‍♂️ The PCMR crowd just defaults to DDR DIMMs, but the world of mobile is mostly LPDDR, from phones and tablets to handhelds and most small laptops.

    If you're going to be pedantic about it, at least be correct, please: they both use 16b LPDDR channels, so the actual counts are 8 channels for Panther Lake and 16 for Strix Halo.

    You're fighting a losing battle either way; the industry has long since settled on 64b as the standard channel width for marketing, independent of the actual number of address/command buses.

    [deleted]

    Yeah, and AMD says Strix Halo has four channels in their customer-facing spec as well, because they have 128b and 256b buses respectively. They use the 64b channel convention as it is a customer-facing spec; that doesn't mean they actually have that many channels in hardware.

    The Panther Lake datasheet isn't public yet, but you can see plainly in the actual spec sheet for Arrow Lake H that it supports 8 channels of LPDDR5X (with an additional spec for channel width). Panther Lake will be the same.

    The equivalent AMD doc is not available for Strix Halo, but you can see the 16x16b spec quoted by Chips and Cheese here.

    You cannot gang these channels into a dual channel mode, that is not how modern memory works, and there is no allowance in the LPDDR5 spec for 64b channels. The 16b channels have separate command/address buses and burst for a sufficient length (32n) to fill a cache line with each access.

    To be clear, I think standardising on 64b "channels" for marketing specifications is a good thing; it allows quick mental calculation of memory bandwidth without having to get into the nitty-gritty. But if you're going to be pedantic and use the actual channel count, it's best to be correct.
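
    For illustration, that quick mental calculation looks like this (a minimal sketch; the transfer rates are the ones floating around this thread, LPDDR5X-8000 for Strix Halo and LPDDR5X-9600 for Panther Lake, so treat them as examples rather than confirmed shipping specs):

    ```python
    # Peak DRAM bandwidth = bus width in bytes x transfer rate.
    # Bus widths: Strix Halo 256-bit (16 x 16b LPDDR5X channels, i.e. "quad
    # channel" in 64b marketing terms); Panther Lake 128-bit (8 x 16b).
    def peak_bandwidth_gb_s(bus_bits: int, mt_per_s: int) -> float:
        return bus_bits / 8 * mt_per_s / 1000

    print(peak_bandwidth_gb_s(256, 8000))   # Strix Halo   -> 256.0 GB/s
    print(peak_bandwidth_gb_s(128, 9600))   # Panther Lake -> 153.6 GB/s
    ```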

  • I'll never understand why AMD is not committing to designing RDNA4-based APUs, and at this point I just take RDNA 3.5 as a joke, because they can't even officially support FSR4 on it, nor on the RX 7000 cards.

    It’s like they are losing on purpose

    It sounds like RDNA4 just doesn't scale down at all. All the rumors point to them going straight from RDNA3.5 to RDNA5 in APUs, skipping RDNA4 altogether.

    Then how is Samsung's Exynos 2600 using RDNA4 if it doesn't scale?

    Where did you find information about it being RDNA4?

    I'm not the original guy you responded to; I just wanted to know, because I couldn't find it on Google. Thx.

    It's a custom implementation. IIRC it's not even RDNA4 but some Samsung derivative that probably has a ton of changes in silicon design to drive power down.

    The short story is that AMD didn't bother to do low power optimizations in the architecture and silicon design. RDNA5 should change that.

    RDNA4 is objectively faster than RDNA3/RDNA3.5 at the same clock, power, CU count, and bandwidth, which is 100% desirable for APUs. Stop making excuses for AMD and their bad decisions. Everything you're claiming Samsung did to make RDNA4 scale is something AMD could have done as well, and has done before: AMD made changes to RDNA3 (RDNA3.5) specifically for APUs, and the same can be done for RDNA4.

    I'm not making excuses, just explaining the rationale, which I don't agree with, BTW.

    Yes, I know AMD are some lazy mofos. RDNA 3.5 until 2029 for iGPUs is a cheapo strategy, as usual.

    RDNA5 doesn't exist. The actual name for the next generation architecture is UDNA1.

    Mark Cerny talked about RDNA5, AMD's leaked documents have talked about both UDNA and RDNA5

    This is nonsense; RDNA5 does exist. I used to work there.

    RDNA 3.5 was the only reason I didn't invest in a Strix Halo mini PC. The price is too much for outdated, unsupported tech.

    Same here. No new FSR tech, and ROCm was just as poor. It works now, but Vulkan is often better. Very disappointing. For AI, Nvidia is so far ahead.

    Might be the same reason they stuck with Vega for so long in APUs. With the desktop memory available at the time (DDR4), an architecture change wouldn't have made a huge difference.

    Once DDR5 came out for laptops, we finally saw RDNA 2+ APUs (Ryzen 6000 APUs).

    I'd bet once DDR6 starts appearing on laptops we'll get a similar iGPU architecture leap.

    If Intel can extract more out of LPDDR5X with the B390, then I don't see how AMD can't. Just too stingy to give more die area to cache?

    So basically RDNA4 is just another RDNA1

    AMD probably just didn't bother making a new APU design when they didn't have a new CPU core to go with it. Medusa Halo is rumored for 2027 with Zen 6 and UDNA/RDNA5, so the Point version will likely release then too.

    Kepler said in another subreddit that Medusa Premium and Halo are launching in 2028. You're only getting the crappy RDNA3.5 iGPUs for the third time.

    Kepler's track record with AMD stuff isn't great, but everything is possible

    Sure. We'll see.

  • AMD has this "it's good enough for a while, and we'll release something great that will make people forget this happened" mentality.

    Vega lasted in mobile for nearly 5 years before we got RDNA2 designs. Now it's RDNA3.5 being built for the mobile platform, betting on that being good enough until the RDNA5/UDNA bridge-die designs release (unverified rumor).

    AMD also has this weird obsession with competitor naming. Sure, it's meant to confuse buyers, but it's hurting them more than helping; maybe it does help in terms of inventory.

    They're not in Intel-like stagnation. They're competing, just not for us in the consumer market, and we're just getting scraps until the enterprise trend dies down (currently the AI trend/bubble).

    Well, it's understandable, as Zen designs are really focused on Epyc and scale down to Ryzen SKUs.

    And the 400 series is a bad refresh, when Ryzen 6000 mobile was the definitive refresh they've done: Zen 3+ and the move to RDNA2. AMD could've made a similar commitment, but currently it isn't.

    Also, AMD forgor: Strix Halo laptops are still nowhere to be found, aside from 1 or 2.

  • AMD is going to f--- around and let Intel catch up, in CPUs and GPUs. They're slacking off on that stuff trying to focus on data center. Hope that works out for them.

    AMD has and always will be their own worst enemy

    Intel: but the enemy of my enemy is my friend.

    Intel 🤝 AMD

    we're cooked guys /s

    we're cooked guys /s

    No sarcasm there lol

    All tech companies are colluding right now, seeing as American business laws don't matter anymore.

    Have the business laws mattered since the dawn of post-dialup internet?

    I don't think they have. Where's our fucking Bell-style breakup? The last real monopoly breakup was 41 years ago... and they let it come right back.

    The EU does half measures, and they don't extend to the rest of the world. It's a travesty that we don't have a nationwide GDPR or forced sideloading on iOS.

    Yes, a frienemy.

    An enerend of sorts

    It's not like they haven't been developing something this whole time; releases are planned many years in advance. Intel will have some rope and then will inevitably get leapfrogged.

    It's not like Intel isn't doing the same. Panther Lake and ARC are holdovers of things developed under Gelsinger.

    AMD is going to f--- around and let Intel catch up, in CPUs and GPUs.

    This is what we actually need: competition. AMD kicked Intel's butt; now Intel is kicking back. It's a win for us either way.

    I hope Intel will catch up and encourage AMD to compete. Having a clear leader in CPUs or GPUs is bad for consumers.

    I mean, we know that AMD is innovating. They literally showed Zen 6 at CES. It's just not ready for mobile yet, and Intel caught up. The same thing happened with Alder Lake, which Intel released before Zen 4 was ready.

    So AMD having much faster iGPUs for a decade or more did not do much.

    But now Intel rolling out something at unknown price/power package will absolutely decimate AMD.

    Regardless of what happens, "AMD's fault" indeed. (Amazing silicon designers and experts-at-everything posting for free on Reddit have convinced me.)

    But now Intel rolling out something at unknown price/power package will absolutely decimate AMD.

    Are you finished, or you wanna make up some more shit I didn't say?

    Oh, you didn't say AMD was "slacking off" and letting Intel "catch up". Figures.

    Where do you get this "absolutely decimate AMD" nonsense from? I did not say that.

    So you can exaggerate what I said and make it mean something completely different. Got it.

    Welp, I got better shit to do this morning. See ya. Feel free to have the last word.

    Laptops haven't really been AMD's focus, and apart from Zen 1, AMD's focus has been mostly on data center, with desktop as the natural offshoot.

    Yeah they ain't immune to being complacent.

    And bad press doesn't make Intel stay bad.

    Yes, their leaders aside from Ms. Su are all holdovers from AMD's almost-bankrupt times. Software sucks, graphics suck, and now they are taking the CPU side down the drain too with high prices. Shitty leaders lead to shitty outcomes.
    They post high, useless benchmark numbers no one cares about. They cannot compete with Apple in terms of battery life, and they get creamed at low power levels.

  • AMD really doesn't gaf about anything other than data centre these days

    News at 10: "Companies prioritise profits"

  • Idgaf when Strix Halo products are nowhere to be seen (notebooks)

    So either AMD doesn't have the capacity to produce them, or OEMs aren't interested; neither option is a compelling reason for AMD to focus on mobile.

  • Except that the B390 will be far more common, as it will be seen in far more laptops. Yes, the 8060S and 8050S can be found in some laptops, but among the laptops you'll find in places like Currys, Best Buy, or MediaMarkt, the B390 will be the most powerful iGPU you're likely to find, and it'll happily outdo a Radeon 890M.

  • AMD is at the point where Intel was before they went downhill and had to start recovering. History repeats; hopefully they see it before it's too late.

    Not going to believe either until we get actual benchmarks and results.

    We wish they were, but they're not. Intel was struggling on all fronts due to their fabs. AMD is actually moving super fast in data centre, in both Epyc and Instinct, which is where they believe their money will be. They just don't care to do anything in the consumer market.

    Yeah the people saying AMD is stagnating are just wrong. AMD is kicking all kinds of ass... They just don't care much for the consumer market currently.

    The other issue is that there's no point releasing a new line of products when no one can afford anything because NAND flash is so expensive.

    Companies CAN afford this because they need to ride the AI wave, but consumers can't, because the average PC's cost has almost doubled.

    But that’s also more because of the increased demand from AI than anything else.

    Sure. But they're still not stagnating. Since December 2023, when the MI300X and MI300A were released, they've released the MI325X, MI350X, MI355X and, soon, the MI400X.

    The latest gen is running HBM3e and 3nm CDNA4. Those are some immensely advanced products.

    On the Epyc side, they've got the 9965, a 192-core, 384-thread monster that Intel can't even attempt to compete with.

    Intel didn't advance in server stuff during that time either. Their top SKU was 18C in Haswell, and that didn't move until, like, Cooper Lake? So from 2014 until 2020 they didn't move an inch in the server space either.

    AMD has gone from 32 cores first gen in 2017 to 6 times that in 2024.

    It's honestly not even comparable. AMD advanced more every generation than Intel did from Haswell to Kaby Lake, at the very least.

    It's honestly not even comparable. AMD advanced more every generation than Intel did from Haswell to Kaby Lake, at the very least.

    Intel didn't advance in server stuff during that time either. Their top SKU was 18C in Haswell, and that didn't move until, like, Cooper Lake? So from 2014 until 2020 they didn't move an inch in the server space either.

    Since we're talking about the server side now: Haswell-EP went from 18 cores maximum to 22-core Broadwell-EP to 28 cores on Skylake-SP. Cascade Lake-AP (a rare bespoke SKU) went up to 56 cores per socket. "Didn't move an inch" is inaccurate.

    In CPUs they're losing market share to Arm, and the datacenter GPUs are mostly bought by companies who can't afford Nvidia.

  • Meanwhile, AMD keeps putting out new chips with years-old GPUs.

  • Core Ultra 2 is already the better mobile SoC. I don't know why AMD thinks 12-16 cores is more important than battery life when it comes to laptops.

  • It does sound complacent but ultimately the proof is in the pudding.

  • If it is not even fair to compare them (because Strix Halo is WAY more watts), then why is AMD comparing them? The B390 will exist; Strix Halo virtually does not in laptops.

  • AMD is playing from the same playbook Intel used a few years ago. Except now, instead of 14nm+++++++, it's RDNA 3.5555555.

  • Lots of markets, and Intel did bribe the OEMs for decades and still does.

  • The Intel iGPU has a far better upscaler. FSR 3.1 is a third-class competitor in comparison. People have been crying out for AMD to release FSR4 for RDNA3.5, but AMD has some seriously stupid execs in charge.

    Halo is so much faster that the upscaler difference doesn't matter at all. Of course, it is probably also bigger.

  • Beware hubris.

  • There are already benchmarks, look them up.

  • Don't they already have an integrated GPU that's on par with an RTX 4060? According to Framework?

    Yes, Strix Halo

  • Strix Halo is nice and all, but prohibitively expensive for many people.

    Arc B390/B370 will be available in much cheaper products, for which AMD doesn't have a proper answer atm. AMD's next lineup can't be lazy if they want to stay competitive.

  • The GPU doesn't matter if you don't have proper drivers, and Intel is still so far behind there.

    Great progress, but the drivers are still going to be the thing that makes people say no.

    Hopefully Intel keeps on chugging away so the drivers work with all the DirectX games, backwards and going forward.

    I'm talking about past DirectX games; you can't just worry about the new games when there are older games that don't run well.

    It took AMD many, many years to get decent drivers. With Intel, I don't know if they're just focusing on hardware and not the drivers, but that's what has been holding them back so far.

    Hopefully they can release a true dedicated GPU again to rival what's out there at a much better price. That would bring at least some competition back until the AI scam is over.

  • Panther Lake is using a superior process technology, so they are right. But it doesn't matter, as customers will choose what's better. Until AMD has something out that uses 2nm, then yes, they will probably be behind.

  • I hope Intel stays competitive and AMD also brings its best to the table.

  • They should be worried about DLSS 4.5, though. Fix the stuttering in FSR 4 and improve image quality.

    And get FSR4 supported on mobile at all.

    FSR 4+ may be great, but game support (number of titles + GPUs supported) is embarrassingly low.

    DLSS 4.5 is not that great, in my opinion. It fixes some ghosting but creates more shimmering because it applies so much sharpening. I had to dial back to 4.0.

  • I just wish Intel would make a very cut-down Panther Lake offering to be the successor to the N1xx/N3xx line of efficient chips that have found their way into mini PCs.

    Wildcat Lake.

    The only issue: it seems to be using 2 P-cores and 4 LPE-cores instead of E + LPE.

    Wow! Excellent. Hopefully we'll see the products come to market soon, and hopefully the 2 P-cores won't matter as much since we're seeing a major lithography improvement. Intel is really impressing me lately.

    It's 6C/6T and 2x Xe3, so don't expect a whole lot of performance. This is Intel's Mendocino.

    The modern Atom is fine by me; the N100 had more performance than a 6500T, so this one should have more than enough compute for many different use cases whilst retaining low-load efficiency. This product could potentially obliterate even the newest and best SBCs for home-lab use cases, even on efficiency.

  • AMD unfazed? I bet they're talking big shit again and then will get absolutely demolished (as happened with Vega, too).

  • Yeah, it's an old architecture; that's the point. Intel moved to the latest node and barely manages to eke out a win. A win is a win nonetheless, but AMD still has plenty of dials to turn up.

    70% faster is "barely eking out a win"? Go ask why AMD is stuck with an 18-month-old architecture despite Intel managing to replace Arrow Lake after 12.

    This. Plus, even Lunar Lake already outperformed Strix Point in many scenarios. Intel is at least one generation ahead here.

  • I don't understand the fuss about iGPUs. Like, why do they assume the average Joe would care about an iGPU at all? That's maybe 5% of the market, and even then... most of them would get a dGPU anyway.

    And apart from the GPU, what's special about the CPU? Combining (Lunar Lake) efficiency with (Arrow Lake) power? Sorry, but my Ryzen AI 7 350 does that already. The top-of-the-line X9 388H is about ~10% faster in single core, aka the only thing that matters, and will probably be in 2500€+ laptops, whereas my 7 350 is in 500€ laptops.

    I tried a 285H laptop alongside the AI 7 350, and not only did it run hotter and less efficiently, it also felt less snappy.

    And the AI 7 350 was designed as a Lunar Lake competitor anyway, so it was never worse in efficiency, and it was ahead of Arrow Lake parts like the 255H in that regard.

    So I don't see why anything should really change...?

    You've got it backwards. The majority of laptops use iGPUs. iGPUs as powerful as discrete graphics allow for cheaper, thinner devices that are more power-efficient. The entire Intel CPU/iGPU performs on par with a 4050 at 60W while drawing only 45W. Factor in the 10-15W the CPU takes alongside the 4050, and you're looking at similar performance at something like half the power.

    Being power-efficient opens up a lot of form factors for gaming, such as thin-and-light laptops, tablets, or gaming handhelds.

    It's not the 4050 at 60W. The laptop they compared against only allows 30W to the 4050. Nvidia's spec for the 4050 is 35W minimum, so I don't know how Dell even got to 30W. Below a certain wattage, GPU performance drops off steeply, because a minimum level of power is required just to have the GPU turned on.

    Panther Lake is built on Intel 18A, which is supposed to be much better than the "ancient" TSMC 5nm the 4050 is built on. The CPU paired with the 4050 is also Arrow Lake, which is less efficient than Lunar Lake. Again, that skews the comparison.

    You can already game on thin-and-light devices with discrete graphics. Laptops like Asus's G14 are only 3.3lb but sport a 4060, which is about twice as fast as Intel's new iGPU. The dGPU turns itself off on battery, and the integrated graphics take over. Anything more intensive should be run with the charger plugged in.

    In short, paying for a big iGPU doesn't make much sense for anyone interested in performance. And gaming handhelds? Does anyone really care about those useless bricks enough to justify the investment in integrated graphics? It's not like Panther Lake is going to be cheap when its laptops start at $1300. With that kind of money, you can get a 2025 Asus Zephyrus G14 with a 5060 and blow its shit out of the water. Or, for those on a budget, 5050 laptops have been seen for $600.

    Integrated graphics have come so far that pairing Intel's newest 18A Panther Lake with an RTX 4050 could still make a lot of sense.

    The slide specifically says 60W sustained for the 4050. I couldn't find your claimed 30W anywhere. If I'm wrong, I'd be interested to see where you got the 30W number from, because that would be shady of Intel.

    Search for the 4050 in the Intel Performance Index. The Dell 14 Premium is the laptop in question, with a TGP of 30W.

    In PCWorld's test, they got 48 fps in Cyberpunk. My 4060 gets 73 fps at 60W using high settings and 2880x1800 with DLSS instead of XeSS. And that's a game where Intel GPUs perform well above average. A 4060 Optimus laptop draws around 3.5W at idle without the screen turned on; with the screen and iGPU powering it, it's about 8W. Having discrete graphics in modern systems doesn't really impact battery life anymore.

    So yeah, Intel was being intentionally misleading, hoping people wouldn't actually bother to check their figures. Panther Lake's massive iGPU still doesn't make sense for anyone who cares about performance. Maybe a little for battery life, if it's more efficient at driving high-resolution displays, despite its large size being wasteful. Most iGPUs go into office PCs. In terms of gamers, Steam's hardware survey suggests that desktops and gaming laptops with dGPUs are the biggest share.

    I think you were looking at the old Core Ultra Series 1 testing, not the current CES testing. For their claim they used the following settings:

    Intel B390: Processor: Intel Core Ultra X9 388H (Panther Lake) PL1=45W; tested in Intel reference platform; Memory: 32GB LPDDR5 9600; Storage: Samsung PM9A1 512GB; Display Resolution: 2880x1800; OS: Windows 11 26200.6725; Graphics Driver: Intel Arc Graphics Pre-Production driver; NPU Driver: Pre-Production driver; BIOS: Pre-Production BIOS; Power Plan set to Balanced, Power Mode set to "Best Performance".

    NVIDIA RTX 4050: Processor: Intel Core Ultra 7 255H (Arrow Lake); tested in Dell 14 Premium with Nvidia GeForce RTX 4050; Memory: 32GB LPDDR5 8400; Storage: Samsung 9100 Pro 1 TB; Display Resolution: 2k IPS; OS: Windows 11 26200.7171; Graphics Driver(s): dGPU: 32.0.15.8180 (GeForce 581.80) & iGPU: 32.0.101.8250; NPU Driver: 32.0.100.4404; BIOS: v1.4.0; Power Plan set to Balanced, Power Mode set to "Best Performance"; Dell Optimized = Ultra Performance. Battery Size: 68Whr

    Nope, I was looking right at the current testing. I do have to make a correction, though: Panther Lake's CPU tile is built on Intel 18A, and the GPU tile is built on TSMC N3E.

    Let's summarize. In a head-to-head battle, Intel claims the 45W Panther Lake Core 388H with its "massive graphics" is 10% faster than a 30W 4050 paired with a 30W Arrow Lake 255H. Panther Lake's CPU is built on the most advanced silicon process node, 18A, designed to compete against TSMC's N2 (2nm), which is set to appear in products in the second half of 2026. Panther Lake's GPU is built on TSMC N3E, a significantly more efficient N3. The 4050 is built on a custom 2020 TSMC 5nm variant, and Arrow Lake is built on TSMC N3 + 6nm. Arrow Lake is designed specifically for high-power use versus the low-power Lunar Lake and Panther Lake.

    The future of integrated graphics is truly bright. I can see it being exactly where it is now: vital for battery life in office laptops and in actual gaming laptops with discrete graphics. Big iGPUs? Mostly irrelevant and a waste of money.

    Ehh, not the majority: all computers use iGPUs. The thing is, for the average Joe, aka 95% of the market, there won't be a difference in usage between an Intel Iris and an RTX 5090 dGPU. And the efficiency would only come into play if they gamed on battery (who does that anyway) or created/edited videos (again, virtually no one would do that without a dGPU). And even then, having your laptop drained in 2 hours 15 minutes instead of 2 hours is not "game-changing".

    So it does not affect efficiency at all during web browsing, watching videos, creating documents, etc.

    That's also the reason why Intel has the non-X 5/7/9 parts, which will probably be by far the more in-demand versions since, again, the average Joe does not care in the slightest about the iGPU.

    And apart from the iGPU, PTL is just a tiny step up from the Ultra 200 series...

  • AMD doesn't want to bring RDNA 4 to APUs, so as not to give FSR4 to users other than those with dedicated GPUs.

  • A brand-new product on a newer node is better than an older product on an older node?! Who knew?