Today we take another look at the Ryzen chipset and discuss further optimizations. Memory is always a question that comes up, and historically it hasn't had much impact for audio, where the performance bottleneck often ends up elsewhere in the setup.
Even with the previous generations of Ryzen, the advised optimal memory speeds were around 2666MHz (first generation) and 3200MHz (second generation), and in our own testing moving up from 2666MHz to 3200MHz on either generation didn't get us any favourable results in audio benchmarking, although it did help video rendering workloads.
As such, I went with the previously suggested optimal memory when testing around launch. AMD has publicly outlined that the optimum speed is now 3733MHz with CAS 16 timings, as this puts the memory on a perfect 1:1 ratio with the internal Infinity Fabric bus.
At this point 3733MHz RAM is still not overly common, and even more uncommon are the super-low CAS 16 kits. I've currently got a 3733MHz pack being shipped to me (although only CAS 17) for further testing when it arrives, but I'll keep that for when I do a full retest in the coming week.
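For reference, the 1:1 ratio AMD describe comes down to some quick arithmetic: a "3733MHz" DDR4 kit transfers 3733 MT/s, so its real memory clock is half that, which lines up with Zen 2's Infinity Fabric clock ceiling. A minimal sketch (function names are mine, purely illustrative):

```python
# Sketch of the DDR4 / Infinity Fabric clock relationship described above.
# DDR transfers twice per cycle, so the memory clock (MCLK) is half the
# advertised MT/s rating. AMD's advised 1:1 mode runs the Infinity Fabric
# clock (FCLK) equal to MCLK.
def mclk_mhz(ddr_rating_mts: float) -> float:
    """Real memory clock in MHz for a given DDR rating in MT/s."""
    return ddr_rating_mts / 2

def is_one_to_one(ddr_rating_mts: float, fclk_mhz: float) -> bool:
    """True when FCLK matches MCLK (the recommended 1:1 ratio)."""
    return abs(mclk_mhz(ddr_rating_mts) - fclk_mhz) < 1.0

print(mclk_mhz(3733))              # 1866.5 MHz memory clock
print(is_one_to_one(3733, 1866.5)) # True: 1:1 with an 1866.5MHz FCLK
```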
The results I have today are more of a comparison to show some basic gains at a slightly cheaper price point. Above 3600MHz, memory carries a sizable price premium, and some of you may be wondering what gains can be achieved at which price points.
To do this testing I've got results generated using the 3200MHz RAM used in the previous testing, 3600MHz CAS 18 RAM (the standard packs we use here), and then the same 3600MHz RAM clocked up to 3733MHz, which in real terms ended up running at around 3725MHz in Windows.
[Chart: DAWBench DSP — stock CPU with 3200MHz, 3600MHz and 3725MHz RAM]
The DAWBench DSP test gave us some small gains at the 64 buffer, which became much more apparent at larger buffer sizes, where we're talking closer to 8% at the 512 buffer.
[Chart: DAWBench VI — CPU overload points for stock CPU with 3200MHz, 3600MHz and 3725MHz RAM]
What we can see here are similar small gains moving from 3200MHz to 3600MHz, although it's fairly marginal overall at this level.
Clocking the RAM up towards its advised 3733MHz in this instance produced more notable gains, with in excess of 10% being seen at most buffer sizes. I'll also note that between the 3600MHz and 3725MHz results the memory hole started to disappear as the CPU overload point moved upwards. I suspect, and remain hopeful, that when we see perfectly matched 3733MHz RAM with the CAS 16 timings AMD have advised, we'll finally see that performance hole disappear for good.
Given that 3600MHz RAM is only about 10% more costly than 3200MHz, that's a no-brainer of an upgrade, but the jump above that to 3733MHz can easily cost twice as much again depending on the quantity and size of RAM sticks you need.
I'd expect memory costs to continue dropping over the coming months, as no doubt many firms will now be ramping up 3733MHz production. Our own supplier was also on the back foot, having already killed off their 3733MHz lines due to a lack of customer interest before the AMD launch; only now are they rapidly bringing back old lines and looking to flesh out their ranges to support the popular new platform.
In regards to overclocking, the advice AMD put forward early on appears to hold true with faster memory installed. In initial testing I overclocked the systems with 3200MHz memory and saw some solid gains. With the faster memory we see the same if not better gains, and we can also run the CPU cooler at stock.
I did note that with an overclocked chip up and running alongside 3600MHz RAM the memory performance hole pretty much disappeared completely, but the system wasn't stable under heavy loads and there is no way you would want to run that in a production environment.
Indeed, it seems that overclocking is more or less impossible when taking the memory over 3200MHz at this time, although given the performance boost we see with the faster RAM this isn’t a complaint. This might even improve in the future as the BIOSes get optimized and better high-speed memory continues to arrive, but it’s very much something to be aware of if buying a machine at this point in the lifecycle.
One thing the results have left me wondering, especially with the gap closing as we approach the 3733MHz optimum, is whether this has always been the case. 3733MHz RAM didn't exist when Ryzen generation 1 arrived, and I'm not even sure it was widely available when Ryzen 2 launched. Even now it carries a rather hefty cost premium, and I have to ponder whether this is simply a case of the memory market catching up to the Ryzen chipset. Has Ryzen so far simply been ahead of its time?
The last bit of testing I'm going to carry out over the coming week is a retest with the information we've picked up since the first look. It'll be running stock clocks with the 3733MHz RAM that is shipping to us now, and it'll use a non-hybrid version of a freshly expanded test setup.
In light of recent testing on the new AMD platform, a number of questions arose, and I'm going to spend some time working through those over a couple of follow-up articles.
The first one to tackle is Cubase, which I ended up pulling from the testing this time due to uncertainty over the results being returned. This was to be the first time using Cubase 10 in the benchmarks, a change I was keen to make in light of the adjustments made to the engine to resolve the MMCSS issues that crept into the previous build due to low-level OS changes. We had been working with a tweaked registry workaround in C9.5, so we were rather keen to see what other gains were to be had in the latest iteration of the software.
Whilst I'm looking at this, I also want to tackle another question in the process, namely that of ASIOGuard.
ASIOGuard is there to stop dropouts and overloading whilst recording; it's essentially another buffer designed to keep you safe from digital gremlins. It also means you're trading off some degree of performance overhead in order to achieve that extra stability.
Normally we test with ASIOGuard disabled, essentially because we're looking to test the hardware and not ASIOGuard itself. The first result I want to post is Cubase 10 with ASIOGuard Off/Low/Normal/High at various buffer settings.
Firstly, you'll note that ASIOGuard off performs far better, although even this still isn't quite what I would have expected.
[Chart: Cubase 10 CPU overload points — ASIOGuard Off / Low / Normal / High]
So, as shown above, ASIOGuard rather skews the performance for us in testing. You can note that, with a CPU overload point of roughly 90% maximum, the ASIOGuard-off setting gave us both the highest total polyphony and succeeded in leveraging the most CPU in that test.
With ASIOGuard on, however, this isn't the whole story. At each buffer setting the total performance was still there above the points where I drew the line, but I couldn't cleanly push past the points I've indicated.
What do I mean by this?
With DAWBench testing, the way we take the metric is to simply keep adding more and more instances of whichever plugin it might be until such point where the audio overloads and then we pull back slightly and take the measurement.
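The procedure described above boils down to a simple search loop. This is a hypothetical sketch only: `audio_is_clean` stands in for the manual listening test and isn't part of any real DAWBench tooling.

```python
# Hypothetical sketch of the DAWBench measurement loop: keep adding plugin
# or voice instances until the audio engine breaks up, then pull back to
# the last clean count and record that as the result.
def find_max_instances(audio_is_clean, step: int = 10, limit: int = 10000) -> int:
    instances = 0
    while instances < limit:
        instances += step
        if not audio_is_clean(instances):
            return instances - step  # pull back slightly and take the measurement
    return instances

# Simulated rig that breaks up above 560 instances.
print(find_max_instances(lambda n: n <= 560))  # -> 560
```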
What I was seeing here was the audio breaking up and then not coming back until I reduced the active channels back to the point that I’ve recorded.
So, for example, the chart above shows ASIOGuard – Low overloading on the 512 buffer at 560 notes. If I keep adding more instances until the point it crackles and falls over, it’s more like 1100 poly with 95% CPU.
So, why are the results on paper looking so low?
Because whilst I can build up to 1100 instances, I then cannot start/stop cleanly without replicating the audio cut-out-and-recover issue I noted above.
So, say I take it to 1000 note poly and the audio is playing away fine. If I stop the project at this point the audio will stop playing. If I then proceed to start Cubase playing again it will immediately lock up, refusing to start playing audio again until I reduce it back to the point that I’ve noted on the chart.
Essentially it’s behaving as if it’s overloading and choking, which doesn’t make for a smooth session when recording.
So, the next question that comes to mind: is this an inherent issue inside the Cubase 10 engine?
Above we see the same set of ASIOGuard On/Off tests running on a 9900K at 4.9GHz all-core with 2666MHz RAM.
[Chart: 9900K Cubase 10 CPU overload points — ASIOGuard Off / Low / Normal / High]
The first thing to note is that the ASIOGuard "off" setting does offer the sort of result curve we would expect to see in this testing situation, and with a minimum of 90% CPU being leveraged, rising quickly to 100%, it's performing as we would hope.
ASIOGuard itself is designed to sit as a safety buffer. At tighter settings you can see where it fails to keep up, with the CPU overloading at lower buffer settings; when working ideally, it trades off performance for stability at lower ASIO buffers, while allowing a little more overhead to be extracted at more relaxed settings.
But that aside, the results above should indicate why we prefer to run any testing in Cubase with ASIOGuard disabled: the results are more balanced, as we're testing just the hardware and not ASIOGuard itself.
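For context on the trade-off being made here, the raw buffer latency arithmetic is simple: a buffer of N samples at sample rate sr adds N/sr seconds of delay per direction. The figures below are illustrative only; ASIOGuard's own internal buffer sizes aren't published numbers.

```python
# Rough ASIO buffer latency arithmetic. ASIOGuard effectively stacks an
# additional, larger buffer on top of the ASIO buffer for playback tracks,
# trading latency for stability.
def buffer_latency_ms(samples: int, sample_rate: int = 48000) -> float:
    """One-way buffer latency in milliseconds."""
    return samples / sample_rate * 1000

for buf in (64, 128, 256, 512):
    print(buf, round(buffer_latency_ms(buf), 2), "ms")
```

At 48kHz this runs from roughly 1.3ms at a 64 buffer up to around 10.7ms at 512, which is why tighter settings leave so much less room for safety nets like ASIOGuard.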
What was also apparent was that I wasn't seeing the "rubber banding" effect on the Intel system; the point where it fell over was pretty much its audio break-up point.
There was none of this being able to push it 200% past its highest start/stop result; in the Intel testing, the point where it started to crackle proved to be the same point where it failed the stop/start part of the test.
So, on the Intel setup, these were the respective results for Cubase and Reaper testing where the performance curves look to be as we’d expect in regards to the point of audio drop out in each instance.
[Chart: Intel 9900K test — Cubase (AG Off) and Reaper CPU overload points]
[Chart: AMD 3900X test — Cubase (AG Off) and Reaper CPU overload points]
So, the reason I ultimately dropped Cubase from this round was the above. I just wasn't sure at the time how or why the results were skewed in the fashion they were, and wanted to go with a test I considered less aggressive in trying to optimize its own handling.
To note, I did a similar shoot-out on Ryzen 1 & 2 setups but wasn't able to close the gap in any meaningful way, although I was using an older build of the sequencer engine at the time. It should also be noted that I'm seeing the memory hole tighten up slightly with RAM faster than the 3200MHz recommended for the last generation, which AMD's recommendations now eschew in favour of the newer 3733MHz packs they've noted as the optimum speed for working with Ryzen.
I can't help but wonder if this was always the case and it was simply masked by the prohibitively high price of 3600MHz+ RAM two years ago (3800MHz is still rather costly at the time of writing). Is this a case of Infinity Fabric making it to market a number of years before the supporting hardware was widely available to the general public?
At the moment the VI test is being updated, so I'll look to do a pure VI test in the coming days and will republish with updated results, as well as delving further into the memory handling side. I'll note that the performance curve I saw in testing this time mirrored my first run with the hybrid test build, but I'm also keen to see how it plays out in a full retest across the board.
To draw this article to a close: Cubase 10 on the Intel side appears to behave as expected, but the AMD handling has proven erratic enough for me to question whether it gives the hardware being examined a fair test. For Cubase users, and especially those of you working with large sample libraries, this raises questions about the suitability of AMD for handling your workload.
For the rest of us, it raises the question of whether it's Cubase or Reaper that is the exception to the rule here, and right now I'm not well placed to answer it. I understand there are further builds in the pipeline, so more testing will be carried out as and when they land.
*Please note, further testing is currently ongoing due to new test builds being made available and due to community feedback requesting a few different usage scenarios. Some interesting results are being seen, updates will be posted in detail over the coming week.*
The AMD Ryzen 3000 series has been keenly anticipated; in fact, the last time there was this much buzz around a release it was quite possibly the first Ryzen generation a couple of years back. At that time we saw the platform pull AMD back into the limelight, and whilst the results were mixed across many usage scenarios, it was clear the platform had real potential to live up to.
In the interim we’ve of course had the 2000 series, which built upon the gains we’d already seen and continued to close the gap even further. AMD, of course, has continued tweaking the platform throughout this period, acknowledging some of the internal latency issues we also saw in the first round of testing and generally showing positive improvements along the way.
I'm coming to this with a short delay after launch due to a shortage of hardware over the first week. The mainstream reviewers looked to have got their hands on them in the week prior to launch, so there is already a lot of coverage out there regarding the more common applications, and the hardware has performed well. The upside of the delay is that I had only put one of the chips across the bench before the launch-day AGESA BIOS update surfaced; after applying it I saw a small improvement to performance, so I ensured all testing was done with it in place.
I'm going to be putting the 3600, 3700X & 3900X through their paces here, and as normal I'll be looking to max-turbo clock them where I can. This ensures we don't have a slowest-core scenario tipping things into overload earlier than necessary, but it does mean that on a stock setup you should allow for some variance.
The 3600, for a non-X chip, did well, allowing us to take it to a steady 4.2GHz on all six of its dual-threaded cores. The 3700X allowed us to max out all eight of its cores at 4.4GHz, and the 3900X managed 4.3GHz whilst sitting around 70 degrees even under maximum load. I managed 4.4GHz, but not without a huge increase in temperatures alongside it.
Whilst the promotional headlines have been focused on the 7nm die shrink, it should be noted that the entire architecture has received an overhaul in the process with AMD noting a sizable 15% increase in IPC performance. Other notable improvements include further tuning to the internal memory latencies and a sizable increase to the L3 cache, both of which should be beneficial to our performance scores.
For the Ryzen testing I'm using the Asus TUF X570 board, which was the first of the new range to land in the office. It was fully updated with the latest BIOS prior to testing and is running 3200MHz Corsair LPX memory.
With news over the past 12 months of security concerns and the various performance-affecting patches that have followed, I've set up a new test bench where the Windows 10 build in use is the current 1903, with all drivers freshly installed. Given all these changes, I've also re-benched a number of the Intel chips in this round of testing, with both the Z390 and X299 boards being fully updated Asus Prime boards.
On top of that reinstall, and having exceeded the benchmarking headroom in the last round with the largest available chips, I've made a few modifications to the standard DAWBench tests this time, as I suspected I'd run the risk of easily surpassing the tests in their default forms.
With the DAWBench DSP test, the SGA1566 plugin now has all instances set to the high-performance setting, running at 24/48 and this gives us plenty of headroom for our needs.
The Kontakt-based DAWBench VI test, on the other hand, I would expect the CPUs we have here to outperform quite quickly.
I’ve attempted to soak up some of the available performance we have on offer by applying two instances of the SGA1566 plugin in high-performance mode to each of the 16 hidden sine tracks, which on my 9900K testbed took up about 50% of the available performance. This should give us a reasonable baseline to start from and still have the ability to check for any performance affected by latency.
Since the Native KA6 interface has had a generational jump and the older model is now discontinued, I've retired the old testing interface. I've switched to an RME Babyface for this round of testing and shall be sticking with that going forward.
Another change is that both sets of tests are being reported using Reaper this time. I completed the testing initially using both Reaper and Cubase, but upon looking over the results I saw an irregularity that sent me back to retest again using Reaper for both sets, more of which I shall cover in the results section.
Having made all those changes, please be aware that these results and prior results are in no way comparable. I've changed the following and have retested all CPUs listed in the results.
The OS version, BIOS versions, Reaper version, the audio interface, and the DAWBench test versions have all changed, and Cubase has been removed.
The one benchmark that can still be used for loose comparison is CPU-Z, and that is where we shall start.
I’ve used the inbuilt 9900K metric as a baseline comparison. I’ll note that the result they have recorded is within 10% of my last round of testing, so it seems quite fitting to use it here.
So first up DAWBench wise is the classic DSP test, running the SGA1566 variant as covered up top.
This test sees us stacking up plugin instances on a thread-by-thread basis until the whole CPU hits breaking point. An impressive result from the sub-£200 AMD, and the top-end 3900X is trading blows with the £1000+ Intel chips. This is our raw performance test and, as hoped, the results are impressive.
The second chart we have is the DAWBench VI Kontakt test, which I imagine is going to be the interesting one for most readers following on from prior write-ups.
And interesting it certainly was. The cross-core latency issue we've seen in earlier models has gone on the 3600, and we were hitting 95% on the 64 buffer with the 3700X, with 100% leveraged at the 128 buffer, both of which are certainly welcome sights.
The 3900X with its new dual-chiplet design was the only model not to come away with a clean sheet, although given we have an extra die section to deal with, this might not prove a huge surprise. I would expect it to mature in much the same fashion as the other chips below it in the range have done over future iterations, and I look forward to them fine-tuning this new design further.
With the first round of testing, as noted up top, I used Cubase 10 initially for the DAWBench VI testing and everything looked great right up until the final test. On the 3900X I saw a 30% performance drop at the 64 buffer, with only 70% of the CPU being used at maximum load; the 128 buffer gave me 80%–90%, with the full CPU being leveraged at the 256 buffer and above, mirroring the low-buffer latency issues we've seen in previous generations.
It's at this point I wondered if we'd see any difference with a sequencer switch. It didn't help in previous testing, but C10 had a few major changes under the hood over earlier versions and I was keen to see if any of those had an impact here. I rebuilt the new test in Reaper and took another look, with Reaper offering differing results. I still saw a performance hole at the 64 and 128 buffers, but this time it was more like 80% (64 buffer) and 90% (128 buffer) of the CPU being leveraged before it started to top out.
So, it's interesting to note the variance between sequencers and how efficiently Kontakt appears to run within them. Upon seeing this I completely re-benched in Reaper, and those are the results presented, but do be aware of the sequencer variance that appears to be in play.
It's also worth noting that some sequencers may not be able to address the full 32 threads efficiently, even if they can currently see them. I can foresee a lot of optimization being required by various DAW coders to ensure their software keeps up with the new hardware that is currently emerging.
So, overall thoughts are of being largely impressed at each given price point. I don't think I'd drop as low as the 3600 personally, but the 3700X has a strong claim as a superb all-rounder at the entry level, and both of these chips seem to have largely shaken off any remaining concerns about internal latency handling.
The 3900X still has the noted performance latency, although it seems to vary between applications, and we don't see it occurring with either the Reaper or Cubase tests on the Intel side. I wouldn't normally be happy seeing anything drop out at 70% or 80% load, but there is certainly an argument that it still offers reasonable value, as even then it exceeds the 9900K, which currently sits at around the same price.
Certainly, anyone working above a 128 buffer has little to no concern there as it appears to recover in full by the 256 buffer.
So there we have it, a great first outing for AMD’s 7nm design. I’ve seen comments aplenty about the lack of overclocking capabilities and yes we’ve come up short of the all core clock that I was aiming for in two of the tests, but I do kind of expect that from any first generation chip after a die shrink. I’ve certainly no doubt that we’ll have refinements over the next couple of years that will successfully extract every last bit of performance from the Zen 2 platform.
My only reservation at this time is compatibility with third-party hardware, mainly interfaces. We saw compatibility issues on Ryzen 1 & 2 with some PCIe sound cards and some USB-based interfaces. ASMedia have a bit of a poor rep on Intel boards where they've provided the third-party USB3 solutions, as audio devices don't tend to play too well with them. We saw similar incidents with the implementation packaged on the generation-one Ryzen boards, and thankfully it was less common on Ryzen gen 2.
Ryzen Gen 3 has an AMD-designed USB implementation built around an ASMedia package, and at this point I've little idea how it will hold up with all the devices we have available. I was testing with a Babyface Pro this time around, so that's validated, but I would certainly check user groups for any compatibility issues with your key devices prior to buying.
Looking forward, unsurprisingly, details of Intel's next refresh have started to leak across various sites. The Comet Lake refresh has a 10-core chip and various price reductions being dangled via those leaks, which obviously look to challenge this Ryzen release when they arrive.
Whilst some people might already be rolling their eyes at this leak timing, those who remember back to the last time we were entrenched in some good ole CPU wars will recall that this is pretty much business as usual, and I can see price wars on the horizon as AMD snatches more and more market share.
But that’s all still to come in the future. Right now, for the time being, the third iteration AMD Ryzen series is easily their most compelling offering yet.
Coffee Lake has been with us now for just over a year, and it's been a rather turbulent period for Intel. AMD's continued gains over the last 12–18 months have marked a change in the marketplace, and the first-generation Coffee Lake launch perhaps felt a little rushed, especially as Intel attempted to respond to the opening volley in the now-ongoing CPU wars.
This time around I find myself looking over the selection of chips in front of me, and the key question on my mind is: "have they managed to extract the platform's potential this time around?"
So, I’ve got 3 different models here all new to the Intel mid-range:
1. The new flagship in the form of the 8-core + Hyper-Threading i9 9900K, running at 3.6GHz with a 5GHz turbo out of the (oddly shaped) box.
Chip is being run at all core 5GHz
2. The i7 9700K featuring 8 cores but no Hyper-Threading. The chip is clocked at 3.6GHz with a 4.9GHz turbo, out of its rather more normally shaped box.
Chip is being run at all core 4.9GHz
3. Lastly the 9600K in another boring box. 6 cores, no Hyper-Threading and 3.7GHz with 4.6GHz on the turbo.
Chip is being run at all core 4.6GHz
So, we see some firsts here and some repositioning in the range. The i9s go mainstream and, in this case, we're seeing a few key differences. The big one is that it's the first time we've seen Intel put out an 8-core mainstream chip. Given we only got our first mainstream 6-core back on the last range refresh, it's good to see them pushed into cramming more value onto the die this time around.
The i9’s are also promising us solder under the heat-spreader this time around, rather than the paste found in models elsewhere in the range, so this should in theory help with overclocking for those wishing to push them a bit more.
The i7 & i5 models this time around are limited to 8 cores and 6 cores respectively with no Hyper-Threading. Whilst it helps to differentiate between the respective ranges, it is going to come as a bit of a shock to anyone used to the current i5/i7 naming convention. On first thought, we wondered if this meant we could expect the new 8-core with no HT to be outperformed by the older 6-core + HT models, although this could very well come down to specific workloads.
Hyper-Threading by its very nature is based around stealing unused clock cycles to get more work done, so if your workload is already thrashing the CPU, having Hyper-Threading isn't going to have much of an impact. In previous testing I've tended to note anywhere between 20% and 60% gains with it turned on, depending on the software in use. It could be argued that an extra 2 real cores could equate to somewhere in the region of 4 or even more lost Hyper-Threads (once again, workload permitting), and we've also got clock and IPC gains to consider here, so playing the 9600K & 9700K off against their predecessors is certainly going to be interesting.
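To put rough numbers on that back-of-envelope argument, a simple throughput model helps. The figures below are purely illustrative and deliberately ignore clock speed and IPC differences:

```python
# Back-of-envelope comparison: if Hyper-Threading adds 20-60% throughput
# per core (workload dependent), how do 8 real cores with no HT compare
# against 6 cores with HT? Illustrative numbers only.
def relative_throughput(cores: int, ht_gain: float = 0.0) -> float:
    """Throughput in 'real core units'; ht_gain is the fractional HT uplift."""
    return cores * (1 + ht_gain)

print(relative_throughput(8))        # 9700K-style: 8 real cores, no HT -> 8.0
print(relative_throughput(6, 0.2))   # 8700K-style, 20% HT gain -> 7.2
print(relative_throughput(6, 0.6))   # 8700K-style, 60% HT gain -> 9.6
```

At the 60% end of the range, the older 6-core with HT can edge out 8 real cores on paper, which is exactly why the results end up being so workload dependent.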
So let's get down to it.
All the standard tests to start with and nothing unusual going on so far. Whilst they are all clocked fairly close together as far as the cores go, you can note differing amounts of L3 cache on each of the chips, which is no doubt going to help a little in both the single and multi-core benchmarks.
So, on with the DAWBench SGA DSP test, and we can see the 3 new chips in yellow above. Starting with the 9600K, the obvious comparison here is against its predecessor and frankly it's a little underwhelming, with somewhere between a 1% and 10% increase depending on the buffer in play, scaling upwards as the buffer size is increased.
The 9700K is next, and we get to compare its new configuration of 8 true cores and no Hyper-Threading, which also comes off poorly here when compared against the older 8700K, with the results showing a 20%–40% drop-off against Intel's own previous-generation class leader.
The loss of Hyper-Threading really looks to have impacted the new generation, at least under the DAWBench classic test. I do get the thought process behind the chip design: the largest new segment in recent years, and one that seems to have captured the marketing teams' imagination, is the rise of content creators who live stream. True cores are far more beneficial for that sort of workload, especially for gamers who wish to stream at the same time, so I fully understand this design choice; in fact it could be argued that this style of chip would be preferable for anyone working live. But for anyone looking for raw performance in the studio, it's all a bit disappointing so far.
The flagship here, however, is no longer the i7 model but the i9 9900K, and it's here where things make rather more sense. It's the first time we've seen an 8-core in Intel's mid-range line-up, and looking at the result above it has settled itself just above the 7820X from Intel's enthusiast (X299) range; to be fair, on paper at least it makes perfect sense that it would replace that chip.
It's the same core count, a few generations newer and clocked higher, so it was always going to be a contender. What it does mean, however, is that once again one of Intel's mid-range chips starts to cannibalize their own enthusiast class. In fact, we've now reached the point where the lower-end enthusiast i7 class has had a dearth of releases over the last 15 months and has largely been killed off; in the same period AMD has successfully taken a sizable bite out of that part of the market too, and continues to take advantage of Intel's lack of new competing models.
Indeed, sat above it in the chart we see the large-core-count AMDs as well as the older-generation i9s, outlining exactly what this test is good at: spreading small workloads efficiently over all the available processing space. Honestly, the results here don't give us any surprises as to how and where the chips are positioned in the range.
Switching over to the DAWBench VI Kontakt-based test, we see a more interesting picture, as the higher single-core clocks appear to give us a welcome boost here. If anything, it really outlines that the Kontakt handling benefits from strong IPC figures all around.
Having the dedicated cores looks to help when working at tighter ASIO buffer settings on both the 9600K and 9700K, although we can see that this benefit disappears on the 9700K once we slacken the setting off to around the 256 buffer. At that point the Hyper-Threading on the older 8700K finally gets a bit of room to breathe and flex its stuff, and this in itself is interesting information.
Thinking about this from a live point of view, where you're aiming for the tightest RTL score and quite likely making use of rompler-style libraries, this does suggest that going with these new all-real-core chips might well pay off for you. However, if you're working in the studio, the loss of performance at the larger buffer settings, at least in comparison with the older generation, might once again prove a little perplexing.
Taking a look at the i9 9900K by comparison, it starts to make more sense again, doing rather a good job of once more making the older 7820X irrelevant. There is less challenge at this end of the chart from the red team, largely due to the lack of solid benchmarks obtained in the last round, which you can catch up on if you hit the link.
What this means is that the options here seem to be becoming even more divided. It's been pointed out that the higher-latency jobs the Zen chips excel at are still applicable to all sorts of media editors, and with each additional chip it becomes ever clearer that these results remain very scenario dependent: Kontakt's way of working tends to favour highly clocked cores and larger IPC figures over spreading the workload across more numerous but slower cores.
Before I round up I just want to throw out a couple of additional charts. I didn’t get a chance to do it with all of them, but I did record the i9 9900K at both stock and at the all core overclock, largely so you can see the difference it can make by setting it to the all core turbo.
Depending on the test and buffer size it’s up to around 8% in these benchmarks, although this can grow as you use more complex chains of processing in your projects. A chip is only really as strong as its weakest core, as once you max out any given core you begin to run the risk of audio artefacts creeping in.
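The weakest-core point can be made concrete with a toy calculation. The per-core loads below are made up purely for illustration, not measured figures:

```python
# A toy illustration: usable headroom is dictated by the busiest core,
# not the average load across the chip.
def headroom_percent(core_loads):
    """Remaining headroom before the busiest core maxes out."""
    return 100 - max(core_loads)

loads = [55, 60, 58, 97]            # hypothetical per-core DSP loads
average = sum(loads) / len(loads)   # 67.5% - looks like plenty of room
headroom = headroom_percent(loads)  # 3% - the busiest core is nearly maxed
```

This is why an averaged CPU meter can look healthy right up until one core clips and artefacts appear.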
I mention this specifically with the i9 9900K as a lot of premium boards have been shipping with 5GHz profiles for a few years now, and it’s rather easy to hit the results I’m showing above with a halfway decent cooling solution. Above that, you’ll probably want to move to a water cooling solution, with 5.2GHz looking to be the target for anyone wanting to really drive it.
I’ll also note that the i7 9700K was running comfortably just below 80 degrees by the time I’d all core turbo’d it, whereas the i5 9600K was sitting nicely around the 60 degree mark even with Prime95 absolutely thrashing it, so I reckon anyone wanting true cores only might have quite a chunk of headroom there to play with if you want to tinker.
So, overall, what are my final thoughts?
The i5 9600K and i7 9700K both feel like a step backwards for our part of the market to a degree. Sure, they have some strengths, and I’ll come back to the example of low latency machines for live use as a prospective user base, but the value proposition in comparison to other chips already out there is where it really falls over in the studio.
Having a sideways move in overall performance is a little disappointing, but we’re seeing an initial street price on the i5 9600K of around £350 against the i5 8600K’s historical showing of around £250. Similarly, the 8700K was around £350 for most of its lifecycle and the 9700K sits at £499 at launch, so we’re seeing price increases across both ranges, although I suspect as supply catches up with the initial demand we may find some price realignment over the coming months, and I wouldn’t be all that surprised to see the new chips reflect older price points once the market stabilises. This is a fairly common occurrence with any new chip release, but admittedly it leaves me feeling rather underwhelmed given all I’ve discussed already from a performance point of view.
The i9 9900K, on the other hand, replaces the 7820X, which spent most of its lifecycle between £400 – £500 in the UK, and the i9 9900K has landed at £599. Even assuming it’s going to drift down over the coming months, we’re still essentially looking at a £100 mark-up over the older model.
The DAWBench classic test here shows us mixed gains depending upon the workload, and it’s up against the AMD chips, which still manage to outperform it within this test. By contrast, the DAWBench VI test flips the picture, with the 9900K outperforming the other chips on the chart, although keep in mind the Threadripper results from previously.
So, does even the i9 9900K make sense? Well, yes, it’s the one that really does here. With the change to the Z390 platform, we see a cost saving over the older X299 platform complete with a more advanced feature set. With the cost differences between boards often reaching and surpassing the £100 mark, the overall cost of going with an i9 9900K over an i7 7820X looks to come out in the i9’s favour, and that’s before considering the performance gains it offers.
There’s additional good news on what was previously a sticking point with the Z390 platform for some users: its restricted memory capacity, as the four slots could only handle a maximum of 64GB. We’ve seen an announcement recently, however, that double stacked DIMMs will start being offered over the coming months to support this platform, so hopefully it shouldn’t be all that long until these boards can handle 128GB as well.
Overall this feels like Intel’s real response to AMD’s advances last year, although given the swift execution and release of the second generation Zen chips, perhaps they are still a tad on the back foot here. It’s kinda where Coffee Lake should have been last time around and it’s of course good to see more power in the mid-range. It does leave me questioning where exactly this leaves the enthusiast class, as anything less than an i9 on that platform is going to prove poor value at this point, and given the age of that platform I can’t help but hope that the next Intel enthusiast platform is not all that far off now.
It feels like this is the repositioning that Intel needed to put its own range back into some context, but it may not prove to be the change that everyone was looking for, at least in our small corner of the market.
At the very least, the i9 9900K emerges as a rather strong contender for us audio users, and I suspect any further i9 based refresh over the coming months is going to make this all make a whole load more sense when the dust settles. But with AMD already promising updates to its own platform, along with tweaks to its memory handling promised over the next few weeks, Intel may have to work even harder over the coming months.
It’s been a while now since we sat down and took a good look at any of the mobile processor releases. It’s a market segment that has been crawling along slowly in recent years with minor incremental upgrades and having checked out the last couple of mobile flagship chips, it was obvious that with each generation we were seeing those refinements focused more on improved power handling rather than trying to extract every drop of performance.
Admittedly, in the shape of last year’s 7700HQ, they perhaps got closer to the equivalent desktop model than any previous generation managed. Whilst welcome, this was really more a symptom of stagnating desktop speeds than any miraculous explosion in mobile power. Whilst the chip itself was a great performer, the fact that it got there by eking out a few percent generation upon generation meant that, by the time we got there, it was all ultimately a little underwhelming.
But now, thanks to AMD’s continued push in the current desktop CPU war, we’ve seen Coffee Lake emerge from the blue camp and now we’re going to get hands-on with the mobile equivalent.
The i7 8750H we have here today is a 6 core chip with Hyper-Threading, running a base clock of 2.20GHz and a max single core turbo frequency of 4.10GHz, and it leads the way when it comes to mobile i7s.
Just as a side note before we kick this off, there is another chip above this in the form of the i9 8950HK, which is also 6 cores with Hyper-Threading but with another 500MHz on the clock. I mention this as Apple has just announced it’s going into the flagship MacBook later in the year, and we have them due to land in PC laptops in a month or two as well, so I will be benchmarking that chip when it arrives with us too.
Already in the very first screenshot above, we’ve inadvertently tipped a nod to what’s going to be the crux of this write-up. The clock speeds are somewhat wide-ranging, to say the least. On paper, there is almost 2GHz of difference between the base and turbo clocks. Keep in mind that it’s single core turbo only up to the 4.1GHz, and suddenly you find yourself asking what the rest of the cores will be doing at that point.
Quickly throwing CPUID’s monitoring on and running it returns a result of 3890MHz, which, had it been across all cores, would have been rather impressive for a mobile chip. In this instance, however, I wasn’t doing anything other than sitting on the desktop when this snapshot was taken. The score you see is the highest core’s score, and its hyper-thread was showing as matching it.
The rest of the cores, however, were largely unused and sat around the baseline 2.0GHz – 2.6GHz level. What we really want to know, of course, is what sort of average speed we can expect with all the cores kicked up to 100% load.
Any longer term followers of these pieces will already be well aware that my preference for testing involves doing an all core overclock or in more basic terms, I tend to favour locking all the cores to the single core max turbo speed.
Yes, it’s an overclock, but it’s one that the chips are kind of rated to. Admittedly, it’s not rated to quite the level we’re working at here, but hey… that’s why we favour some chunky aftermarket cooling in those systems to make everything alright.
Except, when dealing with laptops we can’t go strapping a large chunk of copper to it, in fact, a lot of the tweaks we would wish to make on a desktop system, simply don’t exist in laptop land. Often with laptops, it’s a case of a unit either working out of the box or with a few basic tweaks or otherwise due to drivers or hardware choices it’ll never really be suitable for the sort of real-time processing required for working with audio.
I grabbed a copy of AIDA64 and gave it a quick run, at least enough to force the CPU to load up all the cores, simulating a heavy workload so we can see how those cores respond.
What we see here is all the cores being pushed, with the highest speed core running about 3000MHz in the screenshot. Monitoring it in real-time it was bouncing around 3000 – 3200MHz range. Similarly, at the lower end, we see a core sat around 2600MHz and this would bounce up to around 2800MHz at times.
So, where’s our 4.1GHz turbo? Well, that single core turbo only really achieves such lofty heights if the rest of the cores are sat around doing nothing. In the interest of load balancing and heat management should more than a couple of cores need to be turbo’d then all of them will shift to a safer average.
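That behaviour can be sketched as a lookup from active core count to sustained clock. The bins below are hypothetical, pieced together from the monitoring readings above rather than Intel’s published turbo tables:

```python
# Hypothetical turbo bins for the i7 8750H, inferred from the readings
# discussed above (Intel's real tables differ): the more cores that are
# active, the lower the clock each of them sustains.
TURBO_BINS_GHZ = {1: 4.1, 2: 3.9, 4: 3.4, 6: 3.1}

def sustained_clock_ghz(active_cores, bins=TURBO_BINS_GHZ):
    """Clock the active cores settle at for a given load width."""
    for width in sorted(bins):
        if active_cores <= width:
            return bins[width]
    return bins[max(bins)]  # fully loaded: drop to the lowest bin
```

The shape of the table, single core turbo at the top and a much lower all-core figure at the bottom, is the key thing, not the exact numbers.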
On desktops you see chips with a rated range of 3.8GHz to 4.3GHz sitting mostly around the 4GHz level, which is why I tend to notch them all up to 4.3GHz in that sort of situation. It avoids any sudden ramping up and down and ensures we get nice stable but optimized performance out of a setup without taking any major risks.
With these laptops, we don’t get those sorts of options, nor I suspect would heat permit us to be quite so aggressive with the settings. Whilst the headline here of 6 cores is fairly unprecedented within a consumer level laptop, and certainly on a fairly mainstream chipset, it’s a little bit smoke and mirrors in how it’s presented if you don’t fully understand how the turbo behaves.
The potential issue it presents to us is around the ASIO buffer. With whole channels being assigned to each given thread, we ideally want the performance level across all cores to be as equal as possible. For audio systems the overall performance can often be limited by how powerful the weakest core is, and this is something we need to keep in mind heading into this results roundup.
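For reference, the buffer’s contribution to latency is simply its length divided by the sample rate. A quick sketch (real RTL figures add driver and converter overhead on top of this):

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz=44100):
    """One-way latency contributed by an ASIO buffer of the given size."""
    return buffer_samples / sample_rate_hz * 1000.0

# At 44.1kHz: 64 samples is roughly 1.5ms, 512 samples roughly 11.6ms.
for size in (64, 128, 256, 512):
    print(f"{size:>4} samples -> {buffer_latency_ms(size):5.2f} ms")
```

This is why live players chase the 64 buffer while studio users can afford to slacken it off.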
With the DAWBench DSP test, we’re using the SGA1566 variant running under Reaper for this generation of testing and we see the 8750H performing around the level of an entry-level desktop i5 chip. In comparison to previous generations, this isn’t overly surprising as historically the mobile i7 CPU of any given generation tends to sit around the level of the leading i5 desktop solution in the performance stakes.
Running the DAWBench Vi test we see similar results here too, with the chip coming in just behind the i5 8400 once again. It’s a reasonable showing and in reality, we’re probably looking at maybe a 25% gain over the last generation flagship mobile chip.
Given that we’ve seen 3 or 4 generations now where 10% gains year on year have been the standard, normally we’d be pretty happy about seeing a jump of 25% coming out of a single refresh, and indeed it’s certainly a far better value option than the model it replaced.
However, we saw a jump of 40% on the desktop last year and frankly all we’re doing here is shoehorning in another couple of cores, rather than bringing in a whole new platform. It looks like they’ve played it cautiously by not pushing the chip too much and the temperatures do seem a little on the safe side even under stress testing.
To be fair to them, this is pretty much what the average user wants from a laptop chip, giving us quick bursts to deal with any sudden intensive activity, but otherwise, aggressive power-saving to ensure a long battery life when on the move.
Which of course, is pretty much the opposite of what most of us power users want, as we tend to be looking for a high-performance desktop replacement solution. It’s clear there is a bit of headroom here which will no doubt be leveraged over the next couple of range refreshes, it’s just a little bit frustrating that we can’t extract a bit more of it right now ourselves.
With all that said, I suspect that after seeing the CPU war kick expectations up a notch as it did last year, I may have headed into this with slightly higher expectations than normal this time around.
Overall, the final result here is a solid release with above average generational gains that I’m sure will be more than appreciated by anyone who is in the market for a new model this year.
Looking back over the rather hectic first few months of 2018 in the PC industry, it’s clear that a lot has changed since the last CPU benchmark session late last year. In the space of 6 months, we’ve seen security concerns and the resulting software patches swing Windows performance back and forth as they’ve arrived with us thick and fast. I’ve largely been trying to wait it out and see how the dust settles in the interim, but with the release of new hardware, it’s time to get back into it.
My last bench was based on a build of Windows frozen in late 2016, and the associated drivers have gone through a number of revisions since, so with the launch of Ryzen 2 it’s very much time for an all-new software bench to be set up.
Cubase has moved from 8.0 to 9.5 and Reaper too has advanced a number of builds to 5.79 at the point of testing being initiated. This time around we also see the introduction of the newer SGA build of the DSP test, replacing the older DAWBench DSP test and the latest build of the DAWBench Vi test too.
Before getting underway please note that the new results are in no way comparable to the older charts, other than looking at the rough performance curve differences between certain chips which do appear to be in line with prior results. They are certainly not directly value comparable with all the bench changes that have taken place and it’s always key to keep the playing field as level as possible when doing these comparisons.
This time around I’ve tried to run each chip at its turbo frequency across all cores once again. Modern chips will tend to be rated with both a stock clock and a turbo clock, although what isn’t always clear is that the max turbo rating is often only over 1 or 2 cores by default.
Historically it’s been relatively easy to run most CPUs with those cores being pushed and locked off at the turbo max. However, in the event of a platform being pushed too hard, then this isn’t always viable. For instance, I saw this in testing some of the higher end i9’s, where I would choose to all core at 4.1GHz, rather than leave it at stock and let it 2 core to 4.2GHz with a far lower average leaving me open to possible audio interruptions due to clocking.
It’s also the case here with the 2700X where the overclock would hang the machine if trying to push everything to the 4.2GHz rated turbo speed. Instead, I tried to clock it up both manually and using the AMD tool, both of which topped out around 4.1GHz. After speaking to my gaming team and realising this is fairly common (a number of other reviews have picked up on it as well) I ended up using the utility to set everything up with the slightly lower all core turbo at 4.1GHz and testing there.
The 2700X here slots in behind the 8700K which leads by just short of 20% extra overhead at the tightest buffer setting, and both chips look to scale upwards in a similar pattern as you increase the buffer setting. The 8700K seems to be the most suitable comparison here as the price point (at time of writing in the UK) is around £30 more or about 10% more than the cost of the 2700X at launch.
The story of the performance curve scaling looks to repeat when we come to examine the 2600X and by comparison the 8600K from Intel. However, this time around the results are reversed with the Intel chip lagging behind the AMD model by about 5% across the buffer settings whilst the AMD costs around £25 less which makes it roughly 12% cheaper at launch.
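Putting those price comparisons in one place, a trivial helper makes the percentages explicit. The prices below are the approximate launch-window UK figures implied above, used for illustration rather than as exact quotes:

```python
def premium_percent(price, baseline):
    """Percentage premium (negative = cheaper) of price over baseline."""
    return (price - baseline) / baseline * 100

# Approximate figures implied by the text: the 8700K at ~£30 over a
# ~£300 2700X, and the 2600X at ~£25 under a ~£208 8600K.
intel_premium = premium_percent(330, 300)  # roughly +10%
amd_saving = premium_percent(183, 208)     # roughly -12%
```

Pairing these premiums with the roughly 20% and 5% performance gaps above is what the value argument in each bracket boils down to.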
So a strong showing in the DSP test, where we’re mostly throwing a load of small VST plugins at the CPU. The other test we run here is the DAWBench Vi test, based on stacking up Kontakt instances, which allows us to test the memory response through sample loading alongside the CPU load we see with the DSP test.
With the Gen1 Ryzens, we saw them perform worse here overall, which we suspect was down to the memory response and performance. AMD saw similar performance issues across various segments with certain core software, ranging from gaming to video processing, and there was a lot of noise and multiple attempts to improve this over the life cycle of the chip. One suggestion we saw pay off to some extent in other segments (once again, video and gaming made notable gains) was to move over to using faster memory speeds.
We didn’t see any improvement here for audio applications, although in this instance all testing (both Intel and AMD) has been carried out with 3200MHz RAM, in the interest of trying to maximize the performance where we can as well as keeping things level in that regard.
The headline figure this time around suggests a rough 10% improvement to the IPC (instructions per clock) scores, which of course is promising, although notably this is where AMD was lagging behind Intel even after bringing Ryzen to the market. In the interim we’ve seen the Coffee Lake launch, which also improved Intel’s IPC scores, meaning that whilst AMD has been catching up rapidly of late, Intel does seem intent on clawing back the lead with each successive launch.
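To make the IPC arithmetic explicit, a crude model treats throughput as cores × clock × IPC, so a 10% IPC uplift at the same clock and core count yields roughly 10% more work per second. Real workloads scale less cleanly than this, so treat it as a back-of-envelope sketch:

```python
def throughput(cores, clock_ghz, ipc):
    """Crude model: work per second ~ cores * clock * instructions per clock."""
    return cores * clock_ghz * ipc

gen1 = throughput(8, 4.0, 1.0)   # baseline chip
gen2 = throughput(8, 4.0, 1.1)   # +10% IPC at the same clock and core count
gain_percent = (gen2 / gen1 - 1) * 100
```

It also shows why a rival can erase an IPC gain with a clock bump alone, which is roughly what the Coffee Lake comparison comes down to.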
So looking it over this time, both the 2700X and 2600X look to fall behind their Intel comparable chips. The 2600X is roughly 20% lower than the 8600K this time although it’s moving up to the 2700X that proves more interesting, if only because it helps to outline what’s occurred between the two generation releases.
The older 1800X stood up well against the old 7700K edition at its launch, and indeed that extra 10% IPC boost we see this time may well have given it a solid lead over the Intel chip, if not for the Coffee Lake release in the interim in the shape of the 8700K, which pulls off a convincing lead at this price point currently. Indeed, not only does the 8700K show gains over the previous 7700K chip, it also overtakes the more expensive, although admittedly older, entry-level 6 core 7800X on Intel’s own enthusiast platform.
The 2700X is comparable to the 7800X at a far keener price point, although as noted the 7800X exists as a bit of an oddity by this point, even within its own range, so whilst this might have been a more impressive comparison 12 months ago, now it feels like they may have landed it just a few months too late to make serious waves.
Speaking from an audio point of view, the chips are good, but not exactly groundbreaking. If you also work in another segment where the AMD’s are known to have strengths, then the good news here is that they offer reasonable bang per buck for audio and hold their ground well as far as giving you performance at those price points.
But once again, they don’t appear to be breaking any performance to cost records overall, at least for the audio market. They’ve made solid gains, but then again so did Intel last time around, and this is often how it goes with CPUs when we have the firms battling it out for market share. Not that this is a bad thing; it certainly benefits the end user, whichever your choice of platform.
As a closing note, I saw in my early generation 1 testing a number of interfaces fail to enumerate on the AMD boards. I reported this to a few manufacturers and interestingly the device that first showed up problems on the X370 boards the first time around (in this instance a UAD Twin USB), is behaving superbly on the X470 platform.
Whilst this is a sample size of approximately “1” unit in a range, it does point towards a reworking of the USB subsystem this time around, which can only be a positive. Anyone who was perhaps considering the Ryzen 1 platform but found themselves out of luck with interface compatibility might well fare far better this time around. Obviously, if there were known problems before, please do check with the manufacturers you’re considering for the latest compatibility notes in each instance.
Looking forward, there is a rumoured 2800X flagship Ryzen which is already well discussed but as yet has no release date on the horizon. There has already been discussion, rumour and even some testing and validation leaks out in the wild suggesting that Intel might be sitting on an 8 core Coffee Lake. It would certainly make sense for them to be keeping such a chip in the wings while they gauge the public reaction to these new AMD chips. Similarly, it might turn out that the 2800X is being held back as an answer to those rumoured Intel models should they suddenly appear on the market in the near future.
To wrap it up, essentially we’re in peak rumour season and I’ve no doubt we’ll continue to see a pattern of one-upmanship for the foreseeable future, which continues to be a very positive thing indeed. If you need to buy a system today, then the charts should help guide you, although if you’re not in a rush right now, I’m sure there will be some interesting hardware to consider over the year ahead.
No doubt, the hottest topic in I.T. at the start of 2018 continues to be the CPU security risks that have come to light as 2017 came to a close.
Otherwise known as “Spectre” and “Meltdown”, an exhaustive amount of information has already been written about how these design choices can lead to data being accessed within the computer by processes or other code that shouldn’t have access to it, potentially leaving the system open to attacks by malicious code run on the machine.
For instance, one of the more concerning attack vectors in this scenario is servers hosting multiple customers on one system. In a world where many virtual machines are commonly used in a hosting environment to keep customers separate and secure, allowing this type of code to access data across those boundaries opens up the possibility of transaction details, passwords and other customer records being exposed, which has obviously raised a large amount of concern among both security professionals and end consumers alike.
Off the back of this have emerged the patches and updates required to solve the issue, and along with those some rather alarming headline figures regarding performance levels potentially taking a hit, with claims of anywhere up to 30% of overhead being eaten away under certain types of workload.
As there are many great resources already explaining this including this one here that can help outline what is going on, I’m not going to delve too much into the background of the issues, rather focus on the results of the updates being applied.
We’re going to look at both the Microsoft patch at a software level and test the BIOS update released to support it. There are two issues here, Meltdown and Spectre, and there happen to be two variants of Spectre, one of which can be handled at the software level, with the other requiring a microcode update applied via a BIOS update.
Microsoft has, of course, released their own advisory notes, which are certainly worth a review too and available here. At this time it is advised that Meltdown and the Spectre variants can all affect Intel CPUs and some ARM compatible mobile chips, whereas AMD is only affected by the Spectre variants, with AMD themselves having just issued an updated advisory at the time of writing which can be found here. This is also a largely OS agnostic issue, with Microsoft, Apple, Linux and even mobile OSs all having the potential to be affected, and all have spent the last few weeks rapidly deploying updates and patches to their users.
At this point, I’m just going to quote a portion taken from the Microsoft link above verbatim, as it outlines the performance concerns we’re going to look at today. Keep in mind that in the text below “variant 1 & 2” are both referring to the Spectre issues, whereas Meltdown is referred to as simply “variant 3”.
One of the questions for all these fixes is the impact they could have on the performance of both PCs and servers. It is important to note that many of the benchmarks published so far do not include both OS and silicon updates. We’re performing our own sets of benchmarks and will publish them when complete, but I also want to note that we are simultaneously working on further refining our work to tune performance. In general, our experience is that Variant 1 and Variant 3 mitigations have minimal performance impact, while Variant 2 remediation, including OS and microcode, has a performance impact.
Here is the summary of what we have found so far:
With Windows 10 on newer silicon (2016-era PCs with Skylake, Kabylake or newer CPU), benchmarks show single-digit slowdowns, but we don’t expect most users to notice a change because these percentages are reflected in milliseconds.
With Windows 10 on older silicon (2015-era PCs with Haswell or older CPU), some benchmarks show more significant slowdowns, and we expect that some users will notice a decrease in system performance.
With Windows 8 and Windows 7 on older silicon (2015-era PCs with Haswell or older CPU), we expect most users to notice a decrease in system performance.
Windows Server on any silicon, especially in any IO-intensive application, shows a more significant performance impact when you enable the mitigations to isolate untrusted code within a Windows Server instance. This is why you want to be careful to evaluate the risk of untrusted code for each Windows Server instance, and balance the security versus performance tradeoff for your environment.
For context, on newer CPUs such as on Skylake and beyond, Intel has refined the instructions used to disable branch speculation to be more specific to indirect branches, reducing the overall performance penalty of the Spectre mitigation. Older versions of Windows have a larger performance impact because Windows 7 and Windows 8 have more user-kernel transitions because of legacy design decisions, such as all font rendering taking place in the kernel. We will publish data on benchmark performance in the weeks ahead.
The testing outlined here today is based on current hardware and Windows 10. Specifically, the board is an Asus Z370 Prime A, running a Samsung PM961 M.2 drive, with a secondary small PNY SSD attached. The CPU is an i5 8600 and there is 16GB of memory in the system.
Software-wise, updates for Windows were completed right up to the 01/01/18 point; the patch from Microsoft to address this was released on 03/01/18 and is named “KB4056892”. I start the testing with the 605 BIOS from late 2017 and move through to the 606 BIOS designed to address the microcode update specified by Intel.
Early reports have suggested a hit to the drive subsystem, so at each stage, I’m going to test this and of course, I’ll be monitoring the CPU performance as each step is applied. Also keep in mind that as outlined in the Microsoft advisory above, different generations of hardware and solutions from different suppliers will be affected differently, but as Intel is suggested as being the hardest hit by these problems, it makes sense to examine a current generation to start with.
Going into this, I was hopeful that we wouldn’t see a whole load of processing power lost, simply because the already public explanations of how the flaw could potentially affect the system didn’t read as something that should majorly impact the way an audio system handles itself.
Largely it’s played out as expected. When you’re working away within your sequencer, the ASIO driver is there doing its best to keep itself as a priority and generally, if the system is tuned to work well for music, there shouldn’t be a million programs in the background being affected by this and causing the update to steal processing time. So, given we’re not running the sort of server related workloads that I would expect to cause too much of an upset here, I was fairly confident that the impact shouldn’t be as bad as some suggestions had made out, and largely on the processing side it plays out like that.
However, prior to starting the testing, it was reported that storage subsystems were taking a hit due to these patches and that of course demanded that we take a look at it along the way too. Starting with the worst news first, those previous reports are very much on the ball. I had two drives connected and below we see the first set of results taken from a Samsung M.2. SM961 model drive.
To help give you a little more background on what’s being tested here, each test should be as follows:
Seq Q32T1 – sequential read/ write with multiple threads and queues
4K Q32T1 – random read/ write with multiple threads and queues
Seq – sequential read/ write with a single queue and thread
4K – random read/ write with a single queue and thread.
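As a rough stand-in for what those tests measure, the sketch below times one sequential pass against scattered 4K reads using only the standard library. Unlike a proper tool such as CrystalDiskMark it does not bypass the OS cache, so its numbers aren’t comparable with the charts, but it shows the two access patterns involved:

```python
# Toy sequential vs random 4K read timing (OS cache not bypassed, so
# this illustrates the access patterns rather than raw drive speed).
import os
import random
import tempfile
import time

def write_test_file(path, size=4 * 1024 * 1024):
    with open(path, "wb") as f:
        f.write(os.urandom(size))
    return size

def sequential_read(path, chunk=1024 * 1024):
    """Read the whole file front to back in large chunks."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk):
            pass
    return time.perf_counter() - start

def random_4k_read(path, size, reads=256):
    """Seek to random offsets and pull 4KB blocks, as a sampler does."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        for _ in range(reads):
            f.seek(random.randrange(0, size - 4096))
            f.read(4096)
    return time.perf_counter() - start

if __name__ == "__main__":
    path = os.path.join(tempfile.gettempdir(), "disk_sketch.bin")
    size = write_test_file(path)
    print(f"sequential: {sequential_read(path):.4f}s")
    print(f"random 4K : {random_4k_read(path, size):.4f}s")
    os.remove(path)
```

The random 4K pattern is the one that matters most for streamed sample libraries, which is exactly where the patches bite in the results below.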
There is no doubt a performance hit here to the smaller 4K files, which is amplified as more threads are taken up to handle the workload in the 4K Q32T1 test. On the flip side, the sequential handling seems to either escape relatively unscathed or in some instances even improve to some degree, so there is some trade-off here depending on the workload being handled.
The gains did confuse me at first, and whilst sifting through the data I started to wonder if, given we were running off the OS drive, other services had perhaps skewed it slightly. Thankfully, I also had a project SSD hooked up to the system, so we can compare a second data point against the first.
The 4K results still show a decrease and the sequential ones again hold fairly steady with a few read gains, so it looks like some rebalancing of the performance levels has taken place here too, whether intentional or not.
The DAWBench testing, on the other hand, ends up with a more positive result, starting with the DSP test. This time around I’ve pulled out the newer SGA based DSP test, as well as the Kontakt based DAWBench VI test, and both were run within Reaper.
The result of the DSP test which concentrates on loading the CPU up shows little difference that doesn’t fall within the margin of error & variance. It should also be noted that the CPU was running at 99% CPU load when it topped out, so we don’t appear to be losing any overhead here in that regard.
With the Kontakt based DAWBench VI test, we’re seeing anything between 5% and 8% overhead reduction depending on the ASIO buffer, with the tightest 64 buffer suffering after each update whereas the looser settings coped better with the software update before taking a small hit when we get up to the 256 buffer.
Ultimately the concern here is how will it impact you in real terms?
The minor loss of overhead in the second testing set was from a Kontakt heavy project, and the outcome from the drive tests would suggest that anyone with a sample library that has a heavy reliance on disk streaming may wish to be careful with any projects that are already on the edge prior to the update being applied.
I also timed that project being loaded across all 3 states of the update process. The baseline time to open the project was 20 seconds. After the software update, this didn’t change, with the project still taking 20 seconds to open. However, the BIOS update, once applied along with the OS update, added 2 seconds, giving us roughly a 10% increase in project load time.
So at this time, whilst any performance loss is certainly not welcome, thankfully we’re not seeing the huge skew in the performance stakes that has been touted, and we’re certainly well short of the 30% figure that was initially being suggested for the CPU hit.
There have been suggestions from Microsoft that older generations might be more severely affected, and from the description of how it affects servers I suspect that we may well see that 30% figure and even higher under certain workloads in server environments, but I suspect it’ll be centered more around the database and virtual machine server segments than the creative workstation user base.
Outside of our small corner of the world, TechSpot has been running a series of tests since the news broke, and it’s interesting to see that other software outside of the audio workstation environment seems to be behaving largely the same for a lot of end users, as are the storage setups they have tested. If you’d like to read through those you can do so here.
The issue was discovered over the course of 2017 but largely kept under wraps so it couldn’t be exploited at the time. However, the existence of the problem leaked before the NDA was lifted, and it feels like a few of the solutions that have been pushed out in the days since may have been a little rushed in order to stop anyone more unethical from capitalizing upon it.
As such, I would expect performance to bounce around over the next few months as they test, tweak and release new drivers, firmware and BIOS solutions. The concern right now for firms is ensuring that systems around the world are secure and I would expect there to be a period of optimization to follow once they have removed the risk of malware or worse impacting the end user.
Thankfully, it’s possible to remove the patch after you apply it, so in a worst case scenario you can still revert back and block it should it have a more adverse effect upon your system, although it will then leave you open to possible attacks. Of course, leaving the machine offline will obviously protect you, but then that can be a hard thing to do in a modern studio where software maintenance and remote collaboration are both almost daily requirements for many users.
However you choose to proceed will no doubt be system and situation specific, and I suspect that as updates appear the best practice for your system may change over the coming months. Certainly, the best advice I can offer here is to keep your eye on how this develops, make the choices that keep you secure without hampering your workflow, and review the situation going forward to see if further optimizations can help restore performance to pre-patch levels while a resolution to the problem is worked on by both the hardware and software providers.
Today we have a few more models from the Intel i9 range on the desk in the shape of the 14 core 7940X and the 16 core 7960X. I was hopeful that the 18 core would be joining them as well this time around, but currently another team here has their hands on it, so it may prove to be a few more weeks until I get a chance to sit down and test that one.
Now, I’m not too disappointed about this, as for me, and possibly the more regular readers of my musings, the 16 core we have on the desk today is already threatening to be the upper ceiling for effective audio use.
The reason for this is that I’ve yet to knowingly come across a sequencer that can address more than 32 threads effectively for audio handling under ASIO. These chips offer 28 and 32 threads respectively as they are hyper-threaded, so unless something has changed at a software level that I’ve missed (and please contact me if so), then I suspect at this time the 16 core chip may be well placed to max out the current generation of sequencers.
Of course, when I get a moment and access to the larger chip, I’ll give it a proper look over to examine this in more depth, but for the time being on with the show!
Both chips this time around carry a 165W TDP figure, which is up from the 140W quoted for the 7920X we looked at a month or two back. The TDP figure itself is supposed to be an estimate of the power usage under regular workloads, rather than peak performance under load. This helps to explain how a 14 core and a 16 core chip can share the same TDP rating, as the 14 core has a higher base clock than the 16 core to compensate. So in this instance, it appears that they have to some degree picked the TDP and worked backward to establish the highest performing clocks at that given power point.
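That backward-working can be sketched with a toy model. The numbers here are my own illustrative assumptions, not Intel’s: it pretends power scales linearly with cores times clock at a flat 3.5W per core per GHz, which is a big simplification of real silicon, but it shows why more cores under the same budget implies a lower base clock.

```python
TDP_WATTS = 165.0         # the shared budget quoted for both chips
WATTS_PER_CORE_GHZ = 3.5  # illustrative assumption, not a published figure

def sustainable_base_clock(cores: int) -> float:
    """Base clock (GHz) each core could sustain if the TDP were split evenly."""
    return TDP_WATTS / (cores * WATTS_PER_CORE_GHZ)

for cores in (14, 16):
    print(f"{cores} cores -> ~{sustainable_base_clock(cores):.2f} GHz base")
```

Under this crude model the 14 core lands a few hundred MHz higher than the 16 core at the same 165W, which is the same shape of trade-off the official base clocks show.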
Once the system starts to push the turbo, or when you start to overclock the chip, the power draw will start to rise quite rapidly. In this instance, I’m working with my normal air cooler of choice for this sort of system in the shape of the BeQuiet Dark Rock Pro 3, which is rated at 250W TDP. Water-loop coolers or air coolers with more aggressive fan profiles will be able to take this further, but as is always a concern for studio users, we have to consider the balance of noise and performance too.
Much like the 7920X we looked at previously, the chips are both rated to a 4.2GHz max two core turbo, with staggered clocks running slower on the other cores. I took a shot at running all cores at 4.2GHz, but like the 7920X before it, we could only hit that on a couple of cores before heat throttling would pull them back again.
Just like the 7920X, however, if we pull both of these chips back by 100MHz per core (in this instance, both to 4.1GHz) they prove to be stable over hours of stress testing and certainly within the temperature limits we like to see here. With that in mind, we’re going to test at this point, as it’s certainly achievable as an everyday setting.
As always, first up is the CPU-Z chip info page and benchmarks, along with the Geekbench results.
Intel i9 7940X @ 4.1GHz
Intel i9 7960X @ 4.1GHz
Both chips are clocked to the same level and the per-core score here reflects that. The multi-core score, of course, offers a leap from one chip to the other as you’d expect from throwing a few more cores into the equation.
The DAWBench DSP test and the newer Kontakt based VI test follow this, and once again there isn’t a whole lot I can add here.
The added cores give us improvements across both of these chips, as we’ve already seen in the more general purpose tests. The 7960X does appear to offer a slightly better performance curve at the higher buffer sizes, which I suspect could be attributed to the increase in cache, but otherwise it all scales pretty much as we’d expect.
Given the 7940X maintains at current pricing the roughly £100 per extra core figure (when compared to the 7920X) that Intel was aiming for at launch, it does seem to offer a similar sort of value proposition as the smaller i9’s, just in this case more is more. The 7960X raises this to roughly £125 per extra core over the 7940X at current pricing, so a bit of cost creep there, but certainly not as pricey as we’ve sometimes seen over the years on the higher end chips in the range.
The main concern initially was certainly regarding heat, but it looks like the continued refinement of the silicon since we saw the first i9 batches a few months ago has given them time to get ahead of this and ensure that the chips do well out of the box given adequate cooling.
With the launch of the Coffee Lake chips in the midrange, some of the value of the lower end enthusiast chips appears to have quickly become questionable, but the i9 range above them continues to offer performance levels previously unseen from Intel. There’s a lot of performance here, although the price matches it accordingly, and we often find at this time that more midrange systems are good enough for the majority of users.
However, for the power user with more exhaustive requirements who finds that they can still manage to leverage every last drop of power from any system they get their hands on, I’m sure there will be plenty here to pique your interest.
I’ll be honest, as far as this chipset naming scheme goes it feels that we might be starting to run out of sensible candidates. The Englishman in me wants to eschew this platform completely and hold out for the inevitable lake of Tea that is no doubt on the way. But alas the benchmarking has bean done and it’s too latte to skip over it now.
*Ahem* sorry, I think it’s almost out of my system now.
Right, where was I?
Time To Wake Up and Smell The….
Coffee Lake has been a blip on the horizon for quite a while now, and the promise of more cores in the middle and lower end CPU brackets, whilst inevitable, has no doubt taken a bit longer than some of us might have expected.
Is it a knee-jerk reaction to AMD’s popular releases earlier this year? I suspect the platform itself isn’t, as it takes a lot more than 6 months to put together a new chipset and CPU range, but it certainly feels like this new hardware selection might be hitting the shelves a little earlier than perhaps was originally planned.
It’s clear that we’ve had a few generations now where the CPUs haven’t really made any major gains beyond silicon refinement, and clock speeds haven’t exceeded 5GHz from the Intel factory (of course, the more ambitious overclockers may have had other ideas), so the obvious next move for offering more power in the range is to stack up more cores, much like their server-based brethren in the Xeon range.
What is undeniable is that it certainly appears even to the casual observer that the competitor’s recent resurgence has forced Intel’s hand somewhat and very possibly accelerated the release schedule of the models being discussed here.
I say this as the introduction of the new range and i7 8700K specifically that we’re looking at today highlights some interesting oddities in the current lineup that could be in danger of making some of the more recent enthusiast chips look a little bit redundant.
This platform as a whole isn’t just about an i7 refresh though, rather we’re seeing upgrades to the mainstream i7’s, the i5’s and the i3’s which we’ll get on the bench over the coming weeks.
The i7’s have gained 2 additional physical cores and still have Hyper-Threading, meaning 12 logical cores in total.
The i5’s have 6 cores and no hyperthreading.
The i3’s have 4 cores and no hyperthreading.
Positioning wise, Intel’s own suggestions have focused on the i5’s being pushed for gaming and streaming, with up to 4 physical cores being preferred for games and then a couple extra to handle the OS and streaming. The i3’s keep the traditional entry-level home office and media center positioning that we’ve come to expect over the years, and that leaves the 6 core i7’s sat at the top of the pile of the more mainstream chip options.
Intel traditionally has always found itself a little lost when trying to market 6 cores or more. They know how to do it with servers where the software will lap up the parallelization capabilities of such CPUs with ease. But when it comes to the general public just how many regular users have had the need to leverage all those cores or indeed run software that can do it effectively?
It’s why in recent years there has been a marked move towards pushing these sorts of chips to content creators and offering the ability to provide the resources that those sort of users tend to benefit from. It’s the audio and video producers, editors, writers and artists that tend to benefit from these sorts of advances.
In short, very likely you dear reader.
Ok, so let’s take a look at some data.
At base clock rates the chip itself is sold as a 6 core with Hyper-Threading, running at 3.7GHz with a max turbo of 4.7GHz. For testing, I locked all the cores to the turbo max and tested with a Dark Rock 3 after trialling various coolers beforehand. With that cooler in place, it was bouncing around 75 degrees after a few hours of torture testing, which is great. I did try running it around the 5GHz mark, which was easy to do and perfectly stable, although with the setup I had it was on the tipping point of overheating. If you moved to a water cooling loop, I reckon you’d have this running fine around 5GHz, and indeed I did for some of the testing period with no real issues, although I did notice that the voltages and heat start to creep up rapidly past the 4.7GHz point.
The Geekbench 4 results show us some interesting and even slightly unexpected numbers. With the previous generation 7700K being clocked at 4.5GHz when I benched it and the 8700K being run at 4.7GHz, I was expecting to see gains in the single core score as well as the increase in the multicore score. Instead, the single core score is a few percent lower; I retested a couple of times, found this was repeatable, and had the results confirmed by another colleague.
The multicore score, on the other hand, shows the gains that this chip is all about, exceeding the previous generation as you would expect with more cores being available. The gains here, in fact, highlight something I was already thinking about earlier in the year when the enthusiast i7’s got a refresh, in that this chip looks to not only match the 7800X found in the top end range but somewhat exceed its capabilities at a lower overall price point.
In the testing above, both the DAWBench DSP and DAWBench VI tests continue to reflect this too, effectively raising questions as to the point of the entry-level 7800X in the enthusiast range.
There is almost price parity between the 7800X and 8700K at launch, although the X299 boards tend to come in £50 to £100 or more above the boards we’re seeing in the Z370 range. You do of course get extra memory slots in the X299 range, but you can still mount 64GB on the mid-range board, which for a lot of users is likely to be enough for the lifecycle of any new machine.
You also get an onboard GPU solution with the 8700K, and if anything has been proven over the recent Intel generations, it’s that the onboard GPU solutions they offer are pretty good in the studio these days, perhaps offering additional value to any new system build.
Grinding Out A Conclusion
I’m sure pricing from both sides will be competitive over the coming months as they aim to steal market share from each other. So with that in mind, it’s handy to keep these metrics in mind, along with the current market pricing at your time of purchase in order to make your own informed choice. I will say that at this point Intel has done well to reposition themselves after AMD’s strongest year in a very long time, although really their biggest achievement here looks to have been cannibalizing part of their own range in the process.
That, of course, is by no means a complaint, as when pricing is smashed like this the biggest winner is the buying public, and that truly is a marvelous thing. Comparing the 8700K to the 7700K on Geekbench alone shows us a 50% improvement in performance overhead for a tiny bit more than the previous generation cost, which frankly is the sort of generation-on-generation improvement that we would all like to be seeing every couple of years, rather than the 10% extra per generation we’ve been seeing of late.
Whether you choose to go with Intel or AMD for your next upgrade, we’ve seen that the performance gains for your money are likely to be pretty great this time around on both platforms. If your current system is more than 3 or 4 years old, then it’s even more likely that there will be a pretty strong upgrade path open to you when you do finally choose to take that jump. With hints of Ryzen 2 being on its way next year from AMD, and the likelihood that Intel would never leave any new release unchallenged, we could be in for an interesting 2018 too!
Back in June this year we took a look at the first i9 CPU model with the launch of the i9 7900X. Intel has since followed on from that with the rest of the i9 chips receiving a paper launch back in late August, with the promise of those CPU’s making it into the public’s hands shortly afterward. Since then we’ve seen the first stock start to arrive with us here at Scan, and we’ve now had a chance to sit down and test the first of this extended i9 range in the shape of the i9 7920X.
The CPU itself has 12 cores along with hyper-threading, offering us a total of 24 logical cores to play with. The chip has a base clock of 2.9GHz, a max turbo frequency of 4.3GHz and a reported 140W TDP, which is much in line with the rest of the chips below it in the enthusiast range. Running at that base clock, the chip is 400MHz slower per core than the 10 core 7900X. So if you add up all the available cores running at those clock speeds (12 x 2900 vs 10 x 3300) and compare the two chips on paper, there looks to be less than 2GHz of total available overhead separating them, but still in the 7920X’s favor.
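That back-of-envelope comparison is just the base clocks from the text multiplied out. It ignores turbo behaviour, IPC and memory effects, so treat it purely as a paper exercise:

```python
# Base clock figures quoted above: 7920X = 12 cores @ 2.9GHz, 7900X = 10 @ 3.3GHz
total_7920x = 12 * 2.9  # aggregate GHz across all cores
total_7900x = 10 * 3.3

print(f"7920X: {total_7920x:.1f} GHz, 7900X: {total_7900x:.1f} GHz, "
      f"difference: {total_7920x - total_7900x:.1f} GHz")
```

The aggregate works out to 34.8GHz versus 33.0GHz, a 1.8GHz gap, which is where the “less than 2GHz” figure above comes from.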
So looking at it that way, why would you pay the £200 premium for the 12 core? Well, interestingly, both CPU’s claim to be able to turbo to the same max clock rating of 4.3GHz, although it should be noted that turbo is designed to factor in power usage and heat generation too, so if your cooling isn’t up to the job then you shouldn’t expect it to be hitting such heady heights constantly. Whilst I’m concerned I may be sounding like a broken record by this point, as with all the high-end CPU releases this year, you should take care with your cooling selection in order to ensure you get the maximum amount of performance from your chip.
Of course, the last thing we want to see is the power states throttling the chip in use and hampering our testing, so as always we’ve ensured decent cooling but aimed to keep the noise levels reasonable where we can. Normally we’d look to tweak it up to max turbo and lock it off, whilst keeping those temperatures in check and ensuring the system will be able to deliver a constant performance return for your needs.
However, in this case, I’ve not taken it quite all the way to the turbo max, choosing to hold it back slightly at 4.2GHz across all cores. I found that the CPU would only ever bounce off 4.3GHz when left to work under its own optimized settings, and on the sort of air cooling we tend to favour it wouldn’t quite maintain the 4.3GHz achieved with the 7900X in the last round of testing without occasionally throttling back. It will, however, do it on an AIO water loop cooler, although you’re adding another higher speed fan in that scenario and I didn’t feel the tradeoff was worth it personally, but it’s certainly worth considering for anyone lucky enough to have a separate machine and control room where a bit more noise would go unnoticed.
Just as a note at this point, if you run it at stock and let it work its own turbo settings, then you can expect an idle temperature around 40 degrees, and under heavy load it should still keep under 80 degrees on average, which is acceptable and certainly better than we suspected around the time of the 7900X launch. However, I was seeing the P-states raising and dropping the core clock speeds in order to keep the power usage down, and upon running Geekbench and comparing the results, my 4.2GHz on all cores setting gave us an additional 2000 points (around a 7% increase) over the turbo to 4.3GHz found in the stock configuration. My own temps idled in the 40’s and maxed around 85 degrees whilst running the torture tests for an afternoon, so for a few degrees more you can ensure more constant performance from the setup.
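As a sanity check on those figures, using only the numbers quoted above, a 2000 point gain representing roughly 7% implies a stock multicore score in the high 28,000s. The implied baseline here is a back-of-envelope inference, not a quoted score:

```python
gain_points = 2000    # extra Geekbench multicore points at 4.2GHz all-core
gain_fraction = 0.07  # the roughly 7% increase quoted above

implied_stock_score = gain_points / gain_fraction
print(f"implied stock multicore score: ~{implied_stock_score:,.0f}")
```
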
Also worth noting is that we’ve had our CAD workstations up to around 4.5GHz and higher in a number of instances, although there we’re talking about a full water loop and a number of extra fans to maintain stability under that sort of workload, which wouldn’t be ideal for users working in close proximity to a highly sensitive mic.
Ok, so first up is the CPU-Z information for the chip at hand, as well as its Geekbench results.
More importantly for this comparison is the Geekbench 4 results and to be frank it’s all pretty much where we’d expect it to be in this one.
The single core score is down compared with the 7900X, but we’d expect this given the 4.2GHz clocking of the chip against the 4.3GHz 7900X. The multicore score is conversely up, but then we have a few more cores, so all in all it’s pretty much as expected here.
On with the DAWBench tests and again, no real surprises here. I’d peg it at around an average 10% increase over the 7900X, which given we’re just stacking more cores on the same chip design really shouldn’t surprise us at all. It’s a solid solution and certainly the highest benching chip we’ve seen so far, barring the models due to land above it. Bang per buck, its £1020 price tag compared to the £900 for the 10 core edition seems to sit well on the Intel price curve, and it looks like the wider market situation has curbed some of the price points we might otherwise have seen these chips hit.
And that’s the crux of it right now. Depending on your application and needs, there are solutions from both sides that might fit you well. I’m not going to delve too far into discussing the value of the offerings currently available, as prices do seem to be in flux to some degree with this generation. Initially, when it was listed, we were discussing an estimated price of £100 per core, and now we seem to be around £90 per core at the time of writing, which seems to be a positive result for anyone wishing to pick one up.
Of course, the benchmarks should always be kept in mind along with that current pricing. It remains great to see continued healthy competition, and I suspect that with the further chips still to come this year, we may see some additional movement before the market truly starts to settle after what really has been a release packed 12 months.