
Ryzen Generation 2: 2600X & 2700X On The Testbench

Looking back over the rather hectic first few months of 2018 in the PC industry, it’s clear that a lot has changed since the last CPU benchmark session late last year. In the space of six months, we’ve seen security concerns and the resulting software patches swing Windows performance back and forth as they’ve arrived thick and fast. I’ve largely been trying to wait it out and see how the dust settles, but with the release of new hardware, it’s time to get back into it.

My last bench was based on a build of Windows frozen in late 2016, and the associated drivers have gone through a number of revisions since, so with the launch of Ryzen 2 it’s very much time for an all-new software bench to be set up.

Cubase has moved from 8.0 to 9.5, and Reaper has advanced through a number of builds to 5.79 at the point testing was initiated. This time around we also see the introduction of the newer SGA build of the DSP test, replacing the older DAWBench DSP test, along with the latest build of the DAWBench Vi test.

Before getting underway, please note that the new results are in no way comparable to the older charts, other than the rough performance curve differences between certain chips, which do appear to be in line with prior results. With all the bench changes that have taken place they are certainly not directly comparable on value, and it’s always key to keep the playing field as level as possible when making these comparisons.

This time around I’ve tried to run each chip at its turbo frequency across all cores once again. Modern chips will tend to be rated with both a stock clock and a turbo clock, although what isn’t always clear is that the maximum turbo rating often only applies over one or two cores by default.

Historically it’s been relatively easy to run most CPUs with all cores pushed up and locked off at the turbo maximum. However, when a platform is being pushed too hard this isn’t always viable. For instance, I saw this when testing some of the higher-end i9s, where I chose to run all cores at 4.1GHz rather than leave the chip at stock and let it boost two cores to 4.2GHz with a far lower average across the rest, leaving me open to possible audio interruptions as the clocks shifted.

It’s also the case here with the 2700X, where the overclock would hang the machine when trying to push everything to the 4.2GHz rated turbo speed. Instead, I tried to clock it up both manually and using the AMD tool, both of which topped out around 4.1GHz. After speaking to my gaming team and realising this is fairly common (a number of other reviews have picked up on it as well), I ended up using the utility to set everything up with the slightly lower all-core turbo of 4.1GHz and testing there.

DAWBench DSP SGA 1566 Test – Q2 2018 (Click To Expand)

So first up is the newest variation of the classic DAWBench DSP test, now making use of the SGA1566 plugin from Shattered Glass Audio to apply CPU load until the project starts to crackle and break up. 

The 2700X here slots in behind the 8700K, which leads by just short of 20% extra overhead at the tightest buffer setting, and both chips look to scale upwards in a similar pattern as you increase the buffer setting. The 8700K seems to be the most suitable comparison here, as its price point (at the time of writing in the UK) is around £30, or about 10%, more than the cost of the 2700X at launch.

The story of the performance curve scaling looks to repeat when we come to examine the 2600X against its Intel comparison, the 8600K. This time around, however, the results are reversed, with the Intel chip lagging behind the AMD model by about 5% across the buffer settings, whilst the AMD costs around £25 less, making it roughly 12% cheaper at launch.

So a strong showing for the DSP test, where we’re mostly throwing a load of small VST plugins at the CPU. The other test we run here is the DAWBench Vi test, based on stacking up Kontakt instances, which allows us to test the memory response through sample loading alongside the CPU load we see with the DSP test.

With the Gen 1 Ryzens we saw them perform worse here overall, which we suspect was down to the memory response and performance. AMD saw similar performance issues across various segments with certain core software, ranging from gaming to video processing, and there was a lot of noise and multiple attempts to improve this over the life cycle of the chip. One suggestion we saw pay off to some extent in other segments (once again, video and gaming made notable gains) was to move over to faster memory speeds.

We didn’t see any improvement here for audio applications, although in this instance all testing (both Intel and AMD) has been carried out with 3200MHz RAM, in the interest of trying to maximize the performance where we can as well as keeping things level in that regard.

The headline figure this time around suggests a rough 10% improvement to the IPC (instructions per clock) scores, which of course is promising, although notably this is where AMD was lagging behind Intel even after bringing Ryzen to market. In the interim we’ve seen the Coffee Lake launch, which also improved Intel’s IPC scores, meaning that whilst AMD has been catching up rapidly of late, Intel does seem intent on clawing back the lead with each successive launch.

DAWBench Vi Test – Q2 2018 (Click To Expand)

So looking it over this time, both the 2700X and 2600X look to fall behind their comparable Intel chips. The 2600X comes in roughly 20% lower than the 8600K, although it’s the move up to the 2700X that proves more interesting, if only because it helps to outline what’s occurred between the two generational releases.

The older 1800X stood up well against the 7700K at its launch, and indeed the extra 10% IPC boost we see this time may well have given it a solid lead over Intel, if not for the interim Coffee Lake release in the shape of the 8700K, which currently pulls off a convincing lead at this price point. Indeed, not only does the 8700K show gains over the previous 7700K chip, it also overtakes the more expensive, although admittedly older, entry-level 6-core 7800X on Intel’s own enthusiast platform.

The 2700X is comparable to the 7800X at a far keener price point, although as noted the 7800X exists as a bit of an oddity by this point, even within its own range. So whilst this might have been a more impressive comparison 12 months ago, now it feels like AMD may have landed it just a few months too late to make serious waves.

Speaking from an audio point of view, the chips are good, but not exactly groundbreaking. If you also work in another segment where the AMD chips are known to have strengths, then the good news is that they offer reasonable bang per buck for audio and hold their ground well as far as performance at those price points goes.

But once again, they don’t appear to be breaking any performance-to-cost records, at least for the audio market. They’ve made solid gains, but then again so did Intel last time around, and this is often how it goes with CPUs when the firms are battling it out for market share. Not that this is a bad thing; it certainly benefits the end user, whichever your choice of platform.

As a closing note, in my early Gen 1 testing I saw a number of interfaces fail to enumerate on the AMD boards. I reported this to a few manufacturers, and interestingly the device that first showed up problems on the X370 boards (in this instance a UAD Twin USB) is behaving superbly on the X470 platform.

Whilst this is a sample size of approximately “1” unit in a range, it does point towards a reworking of the USB subsystem this time around, which can only be a positive. Anyone who was considering the Ryzen 1 platform but found themselves out of luck with interface compatibility might well fare far better this time around. Obviously, if there were known problems before, then please do check with the manufacturers you’re considering for the latest compatibility notes in each instance.

Looking forward, there is a rumoured 2800X flagship Ryzen which is already well discussed but as yet has no release date on the horizon. There has already been discussion, rumour and even some testing and validation leaks out in the wild suggesting that Intel might be sitting on an 8-core Coffee Lake. It would certainly make sense for them to keep such a chip in the wings whilst they gauge the public reaction to these new AMD chips. Similarly, it might turn out that the 2800X is being held back as an answer to those rumoured Intel models should they suddenly appear on the market in the near future.

To wrap it up, we’re essentially in peak rumour season and I’ve no doubt we’ll continue to see a pattern of one-upmanship for the foreseeable future, which continues to be a very positive thing indeed. If you need to buy a system today, then the charts should help guide you, although if you’re not in a rush right now, I’m sure there will be some interesting hardware to consider over the year ahead.

Previous CPU Benchmarking Coverage
3XS Systems @ Scan

The Impact Of Meltdown And Spectre For Audio Workstations

No doubt, the hottest topic in I.T. at the start of 2018 continues to be the CPU security risks that have come to light as 2017 came to a close.

Known as “Spectre” and “Meltdown”, an exhaustive amount of information has already been written about how these design choices can lead to data within the computer being accessed by processes or other code that shouldn’t have access to it, potentially leaving the system open to attack by malicious code run on the machine.

For instance, one of the more concerning attack vectors in this scenario is a server hosting multiple customers on one system. In a hosting environment it’s common for many virtual machines to share one machine precisely so that customers are kept separate and secure, and code that can sidestep that isolation opens up the possibility of exposing transaction details, passwords and other customer records, something that has obviously raised a large amount of concern among security professionals and end consumers alike.

Off the back of this have emerged the patches and updates required to solve the issue, and along with those some rather alarming headline figures regarding performance levels potentially taking a hit, with claims of anywhere up to 30% of overhead being eaten away under certain types of workload.

As there are many great resources already explaining this including this one here that can help outline what is going on, I’m not going to delve too much into the background of the issues, rather focus on the results of the updates being applied. 

We’re going to look at both the Microsoft patch at a software level and the BIOS update released to support it. There are two issues here in Meltdown and Spectre, and Spectre itself has two variants: one can be handled at the software level, while the other requires a microcode update applied via a BIOS update.

Microsoft has, of course, released their own advisory notes, which are certainly worth a review too and available here. At this time it is advised that Meltdown and all Spectre variants can affect Intel CPUs and some ARM-compatible mobile chips, whereas AMD is only affected by the Spectre variants, with AMD themselves having just issued an updated advisory at the time of writing, which can be found here. This is also a largely OS-agnostic issue, with Microsoft, Apple, Linux and even the mobile OSs all having the potential to be affected, and over the last few weeks all have been rapidly deploying updates and patches to their users.

At this point, I’m just going to quote a portion taken from the Microsoft link above verbatim, as it outlines the performance concerns we’re going to look at today. Keep in mind that in the text below “variant 1 & 2” are both referring to the Spectre issues, whereas Meltdown is referred to as simply “variant 3”.

One of the questions for all these fixes is the impact they could have on the performance of both PCs and servers. It is important to note that many of the benchmarks published so far do not include both OS and silicon updates. We’re performing our own sets of benchmarks and will publish them when complete, but I also want to note that we are simultaneously working on further refining our work to tune performance. In general, our experience is that Variant 1 and Variant 3 mitigations have minimal performance impact, while Variant 2 remediation, including OS and microcode, has a performance impact.

Here is the summary of what we have found so far:

  • With Windows 10 on newer silicon (2016-era PCs with Skylake, Kabylake or newer CPU), benchmarks show single-digit slowdowns, but we don’t expect most users to notice a change because these percentages are reflected in milliseconds.
  • With Windows 10 on older silicon (2015-era PCs with Haswell or older CPU), some benchmarks show more significant slowdowns, and we expect that some users will notice a decrease in system performance.
  • With Windows 8 and Windows 7 on older silicon (2015-era PCs with Haswell or older CPU), we expect most users to notice a decrease in system performance.
  • Windows Server on any silicon, especially in any IO-intensive application, shows a more significant performance impact when you enable the mitigations to isolate untrusted code within a Windows Server instance. This is why you want to be careful to evaluate the risk of untrusted code for each Windows Server instance, and balance the security versus performance tradeoff for your environment.

For context, on newer CPUs such as on Skylake and beyond, Intel has refined the instructions used to disable branch speculation to be more specific to indirect branches, reducing the overall performance penalty of the Spectre mitigation. Older versions of Windows have a larger performance impact because Windows 7 and Windows 8 have more user-kernel transitions because of legacy design decisions, such as all font rendering taking place in the kernel. We will publish data on benchmark performance in the weeks ahead.

The testing outlined here today is based on current hardware and Windows 10. Specifically, the board is an Asus Z370 Prime A, running a Samsung PM961 M.2 drive, with a secondary small PNY SSD attached. The CPU is an i5 8600 and there is 16GB of memory in the system.

Software-wise, Windows updates were completed right up to 01/01/18; the patch from Microsoft to address this was released on 03/01/18 and is named “KB4056892”. I start the testing with the 605 BIOS from late 2017 and move through to the 606 BIOS designed to apply the microcode update specified by Intel.
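As a side note, if you want to confirm whether the OS-level patch is present on your own machine, the installed hotfix list can be queried. Here’s a minimal sketch in Python using the stock wmic tool (an assumption on my part that Python is on the machine; PowerShell’s Get-HotFix will do the same job):

    # Minimal sketch: check whether the KB4056892 hotfix is installed by
    # querying Windows' wmic "quick fix engineering" list.
    import subprocess

    result = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    )
    print("KB4056892 installed:", "KB4056892" in result.stdout)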

Early reports have suggested a hit to the drive subsystem, so I’m going to test this at each stage, and of course I’ll be monitoring CPU performance as each step is applied. Also keep in mind that, as outlined in the Microsoft advisory above, different generations of hardware and solutions from different suppliers will be affected differently, but as Intel is suggested to be the hardest hit by these problems, it makes sense to examine a current generation first.

The Testing 

Going into this, I was hopeful that we wouldn’t see a whole load of processing power lost, simply because the already public explanations of how the flaw could potentially affect a system didn’t read as something that should majorly impact the way an audio machine handles itself.

Largely it’s played out as expected. When you’re working away within your sequencer, the ASIO driver is there doing its best to keep itself as a priority, and generally, if the system is tuned to work well for music, there shouldn’t be a million programs in the background being affected by this and causing the update to steal processing time. So, given we’re not running the sort of server-related workloads that I would expect to cause too much of an upset here, I was fairly confident that the impact shouldn’t be as bad as some suggestions had made out, and on the processing side it largely plays out like that.

However, prior to starting the testing, it was reported that storage subsystems were taking a hit due to these patches, and that of course demanded we take a look along the way too. Starting with the worst news first: those previous reports are very much on the ball. I had two drives connected, and below we see the first set of results, taken from a Samsung M.2 SM961 drive.

M.2 Test After Applying Meltdown Changes (Click To Expand) – Results Left To Right: 1, Baseline. 2, After Microsoft Update. 3, With Update And BIOS Patch Applied.

To help give you a little more background on what’s being tested here, each test is as follows (there’s a rough sketch of the two access patterns after the list):

  • Seq Q32T1 – sequential read/write with multiple threads and queues
  • 4K Q32T1 – random 4K read/write with multiple threads and queues
  • Seq – sequential read/write with a single queue and thread
  • 4K – random 4K read/write with a single queue and thread
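To illustrate the difference between those access patterns, here’s a rough Python sketch that reads the same set of 4K blocks from a scratch file, first in order and then shuffled. It’s purely illustrative (the file name and sizes are arbitrary, and unlike a real benchmark it doesn’t bypass the OS cache), but it shows the mechanical difference between the sequential and random 4K tests:

    import os
    import random
    import time

    PATH = "testfile.bin"              # arbitrary scratch file
    FILE_SIZE = 256 * 1024 * 1024      # 256MB test file
    BLOCK = 4096                       # 4K blocks, as in the 4K tests

    # Create the scratch file in 1MB chunks.
    with open(PATH, "wb") as f:
        for _ in range(FILE_SIZE // (1024 * 1024)):
            f.write(os.urandom(1024 * 1024))

    def bench(offsets):
        # Read one 4K block at each offset and return throughput in MB/s.
        with open(PATH, "rb", buffering=0) as f:
            start = time.perf_counter()
            for off in offsets:
                f.seek(off)
                f.read(BLOCK)
            elapsed = time.perf_counter() - start
        return (len(offsets) * BLOCK / (1024 * 1024)) / elapsed

    count = FILE_SIZE // BLOCK
    seq = [i * BLOCK for i in range(count)]   # in-order: the Seq tests
    rnd = random.sample(seq, k=count)         # same blocks, shuffled: the 4K tests

    print(f"Sequential: {bench(seq):8.1f} MB/s")
    print(f"Random 4K:  {bench(rnd):8.1f} MB/s")
    os.remove(PATH)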

There is no doubt a performance hit here to the smaller 4K files, which is amplified as more threads are taken up to handle the workload in the 4K Q32T1 test. On the flip side, the sequential handling seems to escape relatively unscathed, and in some instances even improves to some degree, so there is some trade-off here depending on the workload being handled.

The gains did confuse me at first, and whilst sifting through the data I started to wonder if, given we were running off the OS drive, other services had perhaps skewed it slightly. Thankfully, I also had a project SSD hooked up to the system, so we can compare a second data point against the first.

SSD Meltdown Testing (Click To Expand) – Results Left To Right: 1, Baseline. 2, After Microsoft Patch. 3, After BIOS Update.

The 4K results still show a decrease and the sequential results once again hold fairly steady with a few read gains, so it looks like some rebalancing of the performance levels has taken place here too, whether intentional or not.

The DAWBench testing, on the other hand, ends up with a more positive result. This time around I’ve pulled out the newer SGA-based DSP test as well as the Kontakt-based DAWBench VI test, and both were run within Reaper.

SGA Test For Meltdown (Click To Expand)

The result of the DSP test, which concentrates on loading up the CPU, shows little difference beyond the margin of error and normal variance. It should also be noted that the CPU was running at 99% load when it topped out, so we don’t appear to be losing any overhead in that regard.

DAWBench VI Test For Meltdown (Click To Expand)

With the Kontakt-based DAWBench VI test, we’re seeing anything between a 5% and 8% overhead reduction depending on the ASIO buffer, with the tightest 64 buffer suffering after each update, whereas the looser settings coped better with the software update before taking a small hit at the 256 buffer.

The Verdict

Ultimately the concern here is how will it impact you in real terms?

The minor loss of overhead in the second testing set was from a Kontakt-heavy project, and the outcome of the drive tests would suggest that anyone with a sample library that relies heavily on disk streaming may wish to be careful with any projects that were already on the edge prior to the update being applied.

I also timed that project loading across all three states of the update process. The baseline time to open the project was 20 seconds, and after the software update this didn’t change, with the project still taking 20 seconds to open. However, once the BIOS update was applied along with the OS update, it added 2 seconds, giving us roughly a 10% increase in project load time.

So at this time, whilst any performance loss is certainly not welcome, thankfully we’re not seeing quite the huge skew in the performance stakes that was touted, and we’re certainly well short of the 30% figure initially suggested for the CPU hit.

Microsoft has suggested that older generations might be more severely affected, and from the description of how it affects servers I suspect we may well see that 30% figure, and even higher, under certain workloads in server environments, but I suspect it’ll be centred more around the database and virtual machine server segments than the creative workstation user base.

Outside of our small corner of the world, TechSpot has been running a series of tests since the news broke, and it’s interesting to see that software outside the audio workstation environment seems to be behaving largely the same for a lot of end users, as are the storage setups they have tested. If you’d like to read through those, you can do so here.

Looking Forward

The issue was discovered over the course of 2017 but largely kept under wraps so it couldn’t be exploited at the time. However, the existence of the problem leaked before the NDA was lifted, and it feels like a few of the solutions pushed out in the days since may have been a little rushed in order to stop anyone unethical capitalizing upon it.

As such, I would expect performance to bounce around over the next few months as they test, tweak and release new drivers, firmware and BIOS solutions. The concern right now for firms is ensuring that systems around the world are secure and I would expect there to be a period of optimization to follow once they have removed the risk of malware or worse impacting the end user.

Thankfully, it’s possible to remove the patch after you apply it, so in a worst case scenario you can still revert back and block it should it have a more adverse effect upon your system, although it will then leave you open to possible attacks. Of course, leaving the machine offline will obviously protect you, but then that can be a hard thing to do in a modern studio where software maintenance and remote collaboration are both almost daily requirements for many users. 

However you choose to proceed will no doubt be system- and situation-specific, and I suspect that as updates appear the best practice for your system may change over the coming months. Certainly, the best advice I can offer here is to keep your eye on how this develops, make the choices that keep you secure without hampering your workflow, and review the situation going forward to see if further optimizations can help restore performance to pre-patch levels as a resolution to the problem is worked on by both the hardware and software providers.

The Intel i9 7920X On The Bench

Back in June this year we took a look at the first i9 CPU model with the launch of the i9 7900X. Intel has since followed on with the rest of the i9 chips, which received a paper launch back in late August with the promise of those CPUs making it into the public’s hands shortly afterward. Since then we’ve seen the first stock start to arrive with us here at Scan, and we’ve now had a chance to sit down and test the first of this extended i9 range in the shape of the i9 7920X.

The CPU itself has 12 cores along with hyper-threading, offering us a total of 24 logical cores to play with. The base clock of the chip is 2.9GHz with a max turbo frequency of 4.3GHz and a reported 140W TDP, much in line with the rest of the chips below it in the enthusiast range. Running at that base clock the chip is 400MHz slower per core than the 10-core 7900X. So if you add up all the available cores running at those clock speeds (12 × 2.9GHz = 34.8GHz vs 10 × 3.3GHz = 33GHz) and compare the two chips on paper, there looks to be less than 2GHz of total available overhead separating them, but still in the 7920X’s favour.

So looking at it that way, why would you pay the £200 premium for the 12-core? Well, interestingly, both CPUs claim to be able to turbo to the same maximum clock rating of 4.3GHz, although it should be noted that turbo is designed to factor in power usage and heat generation too, so if your cooling isn’t up to the job you shouldn’t expect it to hit such heady heights constantly. Whilst I’m concerned that I may sound like a broken record by this point, as with all the high-end CPU releases this year, you should take care with your cooling selection in order to ensure you get the maximum amount of performance from your chip.

Of course, the last thing we want to see is the power states throttling the chip in use and hampering our testing, so as always we’ve ensured decent cooling but aimed to keep the noise levels reasonable where we can. Normally we’d look to tweak it up to max turbo and lock it off, whilst keeping those temperatures in check and ensuring the system will be able to deliver a constant performance return for your needs.

However, in this case I’ve not taken it quite all the way to the turbo max, choosing to hold it back slightly at 4.2GHz across all cores. I was finding that the CPU would only ever bounce off 4.3GHz when left to work under its own optimized settings, and on the sort of air cooling we tend to favour it wouldn’t quite maintain the 4.3GHz achieved with the 7900X in the last round of testing without occasionally throttling back. It will, however, do it on an AIO water-loop cooler, although you’re adding another higher-speed fan in that scenario and I didn’t feel the trade-off was worth it personally, but it’s certainly worth considering for anyone lucky enough to have a separate machine and control room where a bit more noise would go unnoticed.

Just as a note at this point: if you run it at stock and let it work its own turbo settings, then you can expect an idle temperature around 40 degrees, and under heavy load it should still keep under 80 degrees on average, which is acceptable and certainly better than we suspected around the time of the 7900X launch. However, I was seeing the P-states raising and dropping the core clock speeds in order to keep power usage down, and upon running Geekbench and comparing the results, my 4.2GHz all-core setting gave us an additional 2000 points (around a 7% increase) over the default turbo-to-4.3GHz stock configuration. My own temperatures idled in the 40s and maxed around 85 degrees whilst running the torture tests for an afternoon, so for a few degrees more you can ensure more consistent performance from the setup.

Also worth noting is that we’ve had our CAD workstations up to around 4.5GHz and higher in a number of instances, although there we’re talking about a full water loop and a number of extra fans to maintain stability under that sort of workload, which wouldn’t be ideal for users working in close proximity to a highly sensitive mic.

Ok, so first up is the CPUz information for the chip at hand, as well as its Geekbench results.


7920X CPUz & Geekbench 4 Results (Click To Expand)

More important for this comparison are the Geekbench 4 results, and to be frank it’s all pretty much where we’d expect it to be.

7920X Geekbench 4 Chart (Click To Expand)

The single-core score is down compared with the 7900X, but we’d expect this given the 4.2GHz clocking of the chip against the 4.3GHz of the 7900X. The multi-core score is up, but then we have a few more cores, so all in all it’s pretty much as expected here.

DAWBench DSP 7920X (Click To Expand)
DAWBench VI 7920X (Click To Expand)

On with the DAWBench tests and again, no real surprises here. I’d peg it at around a 10% average increase over the 7900X, which, given we’re just stacking more cores onto the same chip design, really shouldn’t surprise us at all. It’s a solid solution and certainly the highest-benching chip we’ve seen so far, barring the models due to land above it. Bang per buck, its £1020 price tag compared to £900 for the 10-core edition sees it perform well on the Intel price curve, and it looks like the wider market situation has curbed some of the price points we might otherwise have seen these chips hit.

And that’s the crux of it right now. Depending on your application and needs, there are solutions from both sides that might fit you well. I’m not going to delve too far into discussing the value of the offerings currently available, as prices do seem to be in flux to some degree with this generation. Initially, when it was listed we were discussing an estimated price of £100 per core, and now we seem to be around £90 per core at the time of writing, which seems a positive result for anyone wishing to pick one up.

Of course, the benchmarks should always be kept in mind along with that current pricing. It remains great to see continued healthy competition, and I suspect that with further chips still to come this year we may see some additional movement before the market truly settles after what really has been a release-packed 12 months.

The 3XS Systems Selection @ Scan

Intel launches the Skylake chipset and we DAWBench it in the studio.

Intel’s latest chipset has recently launched, and the Z170 series, or Skylake as it is informally known, is a refinement of the earlier Broadwell range launched last year. The Broadwells were most notable for bringing 14nm processors to the market, although these CPUs tended to be lower-powered solutions and so didn’t register all that much on the enthusiast’s radar.

Of course there is nothing wrong with lower-powered solutions, and the lower heat is always great, especially if you want a low-noise system to work with. But for those who also required large amounts of performance, the Broadwells were simply not all that attractive, with many of us who were looking for the very best performance at a given price point choosing to stick with the Haswell platform from the generation before, as it simply offered the best bang-per-buck solution.

So with that in mind, we’ll take a look at overall performance using the trusty DAWBench test and see how it all stands, along with consideration being given to both upgrade and new machine scenarios.

We’ve discussed DAWBench a number of times over the years, the last being our start-of-year round-up. As this is a quick test to see how the new chips hold up, if you’re not already up to speed may I suggest checking out that last visit; it should give you a quick grounding before we dive in.

You can find that round of testing here.

Fully caught up?

Ok. Then let’s begin.

Give the image below a click and you can see our test results.

August 2015 DPC Chart

So this time around we’re testing two CPUs: the i5 6600K and the i7 6700K. We’ve benched them in two different states: the lower clock speed is the CPU at stock with the turbo locked on at 100% of the advertised turbo clock speed, and the second test shows the CPU in question overclocked to the 4.4GHz setting that we supply our systems at.

Including the overclocked option allows us to see what sort of difference the overclocking process can make, which in turn should also help us measure the new chips against some of the older CPU scores where we’ve worked with a similar overclock figure. Also be aware that we keep our overclocks on workstations rather minimal, choosing to get the best out of the chip rather than push it to its limits.

This means that we don’t ramp up the voltages and generate the heat that comes with the higher overclocks often seen on gaming systems, which have fast fans and noisy cooling to compensate, something that would of course be completely unacceptable in a recording studio environment.

Starting with the i5: it pretty much returned performance levels matching the older 4790K chip, with a small performance boost showing up at the very tightest buffer settings, which is always a welcome bonus. As a replacement for the older chip it keeps the value the same whilst giving you access to the other benefits of the platform, so for a new build these should prove most welcome, although as an upgrade from an older i5 it’s going to be harder to justify.

Of course, if you are looking to upgrade in the midrange then the i7 option will possibly make more sense anyhow, and this is where it gets a bit more interesting. The good news here is that we see both a slight power saving and roughly 10% more performance clock for clock over the older 4790K, which held the midrange performance crown until the launch of these new chips.

As I’ve already touched upon briefly, Skylake’s main selling point has been the other features it introduces to the mainstream. The boards we’ve seen are offering more M.2 slots, which in themselves offer transfer speeds in excess of four times those seen on current SSDs. Some boards are also offering the ability to hybrid-RAID them with PCIe-based add-in cards, meaning that if you’re tempted, this platform will offer up some truly amazing data transfer speeds that could transform your time in the studio if you work with large sample libraries and templates, as some VSL users do.

Additionally, USB 3.1 and USB Type-C are now native to the Z170 chipset, and this standard is only going to grow over the coming years, so early adopters, this is your platform. It’s also the first time we’ve seen DDR4 in a mainstream setup, and for those working with video editing on the side, the extra bandwidth will prove beneficial to some extent. AVX2 instruction improvements may also prove beneficial to multimedia applications in the future; although these tend to impact CAD and video software mostly, some plugin manufacturers or even DAW coders may eventually choose to leverage these instruction set improvements.

All this is great as far as building a new machine is concerned, as any improvement for your money is always going to be a good thing. For those looking to upgrade older machines, however, the small incremental improvements mean that anyone who currently owns a CPU from Ivy Bridge upwards is going to be hard pressed to justify an upgrade to the more modern equivalent, although there are certainly some improvements there if your hand is forced into a new setup by aging hardware reaching the end of its lifecycle.

For those users with more recent machines that do require an upgrade path, the X99 platform offers a very attractive option right now, with solid bang per buck for those needing more performance from their system. Also worth noting is that with the extra cost of the Z170 platform moving to DDR4, and indeed DDR4’s ever-decreasing price points, the enthusiast X99 setups are now starting to reach price points less than a hundred pounds above their mid-range brethren.

This all means that the X99 may offer many users more value for money over the long term, and it should certainly be considered by anyone looking at a new studio solution at this time if they want the longest lifespan they can get from a new machine.

All DAWBench Testing Results

Scan 3XS Audio Systems

Audio Computer System Benchmarking

Every year with computer systems, as with so many other products, there is always something bigger, better and faster becoming available. The question is how do we validate those claims and work out which solution will fit which user, whilst offering the best performance at any given price point?

Here at Scan we use a number of different tests. Where gamers concern themselves with performance indicators like 3DMark and video editors concentrate on Cinebench, for audio the stand-out test used by retailers and reviewers alike is DAWBench. DAWBench’s working methodology is a rather large subject in itself and something we will cover in later articles in much more depth, but here we can give a quick overview of how it relates to audio computer system performance.

The DAWBench tests revolve around running as many instances of a given effect or audio source as possible until the CPU overloads and audio corruption is generated in the signal path. The most common variation of this test is the RXC compressor test, which has been in use for a number of years now and has plenty of results generated over time, making it ideal for looking at how performance has grown from generation to generation of audio computer systems.

The test itself is fairly simple to carry out and can be run in a number of popular sequencers including (but not limited to) Cubase, Reaper, Sonar and Pro Tools. The template for the test can be downloaded from the DAWBench website and consists of 4 tracks of audio parts and 40 channels of sine waves. On each of the sine wave channels 8 RXC compressors are already set up but not yet activated, and it is these you switch on one at a time in order to put the system under more and more load.

Whilst testing, the sine wave channels you are working with are turned down, but the accumulated compressors continue to add load to the system, and you monitor the situation by means of the looping audio tracks playing through your speakers. As the processing ability of the system reaches its maximum, the audio you hear will start to distort and break up, and at this point you turn off a few compressor instances, taking it back to the point where the audio is clean and unbroken. Once there, you make a note of the total number of RXC compressor instances achieved, and that is your score at the buffer setting in question; the sketch below outlines the procedure in code form.
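For clarity, the procedure boils down to a simple loop. This is a toy sketch, not DAWBench itself: the per-compressor load figure and the CPU ceiling are made-up stand-ins for what the tester actually does by ear in the sequencer:

    LOAD_PER_COMPRESSOR = 0.35   # hypothetical % CPU load per RXC instance
    CPU_CAPACITY = 99.0          # audio starts breaking up beyond this

    def audio_is_clean(enabled):
        # Stand-in for listening to the looping audio tracks for breakup.
        return enabled * LOAD_PER_COMPRESSOR <= CPU_CAPACITY

    def dawbench_score(total_instances=320):   # 40 channels x 8 compressors
        enabled = 0
        # Switch compressors on one at a time while the audio stays clean;
        # the score is the highest clean instance count at this buffer size.
        while enabled < total_instances and audio_is_clean(enabled + 1):
            enabled += 1
        return enabled

    print(dawbench_score())   # 282 with the illustrative numbers above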

A quick real-world explanation of buffer latency for those not familiar with it is this: a low buffer setting means that your input devices can communicate quickly with the CPU inside the audio computer system and the data can be processed quickly, which is crucial for real-time interaction. Something you can try yourself is setting the buffer latency in your sound card control panel to its lowest figure, normally around the 32/48/64 level, and playing a note on your MIDI controller, which you will find is very responsive at these settings. If, however, you raise the latency setting up to around the 1024 level or higher and trigger your MIDI controller again, you’ll notice a definite amount of lag between the key press and the sound coming out of the speakers; the quick calculation below shows why.
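The lag follows directly from the maths: the interface only hands audio to the CPU once per buffer, so one buffer’s worth of samples is the minimum wait in each direction. A quick sketch, assuming a 44.1kHz sample rate (the exact figures depend on your interface and driver, which add their own overhead on top):

    SAMPLE_RATE = 44100   # samples per second

    def buffer_latency_ms(buffer_size):
        # Time to fill (or play out) one buffer of audio.
        return buffer_size / SAMPLE_RATE * 1000

    for buf in (32, 64, 128, 256, 512, 1024):
        print(f"{buf:5d} samples -> {buffer_latency_ms(buf):5.1f} ms per buffer")

    # 32 samples -> 0.7 ms, 64 -> 1.5 ms ... 1024 -> 23.2 ms, at which
    # point the gap between key press and sound is clearly audible.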

So why would we want to run an interface at 1024 or higher settings?

As you bring down the buffer figure to improve response times, you place more and more load upon the CPU, as a smaller buffer is forced to talk to the CPU more often, which means more wasted cycles as it switches from other jobs to accommodate the data being processed (the snippet below puts numbers on this). Whilst an artist performing or recording in real time will want the very lowest settings, for the fastest foldback of audio, to enable them to perform their best, a mix engineer may wish to run with these buffers set far higher to free up plenty more CPU headroom, giving high-quality inline processing and VSTi’s the performance to carry out their tasks without overloading the processor, which as we’ve seen before would cause poor results in the final mixdown.
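The flip side of the same maths explains the CPU cost: the number of times per second the driver must interrupt the CPU is the sample rate divided by the buffer size, so shrinking the buffer multiplies the context switching. A continuation of the sketch above, under the same 44.1kHz assumption:

    SAMPLE_RATE = 44100

    # Callbacks per second the audio driver must service at each buffer size.
    for buf in (64, 256, 1024):
        print(f"{buf:5d} samples -> {SAMPLE_RATE / buf:6.1f} callbacks/sec")

    # 64 samples -> ~689 callbacks/sec; 1024 -> ~43. Each callback costs
    # scheduling overhead on top of the audio work itself.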

To keep the playing field level, the results below have been tested with Windows 7 64-bit, and in all these tests we have used a FireWire M-Audio Profire 1814 interface to ensure the results are not skewed by using various interfaces with different driver solutions. There are better cards that will give better results at super-low latencies, with the RME range for instance going down to buffer settings of 48 on the USB/FireWire solutions and even 32 on the internal models. The M-Audio unit, however, has great drivers for the price point, and we feel that giving figures using an interface at an accessible price point gives a fair reflection of the performance available to the average user; those in a position to invest in more premium units should find themselves with additional performance gains. We will be comparing various interfaces in the future here on the blog, and there are benchmarks being produced in the DAWBench forums which also make good further reading for those of you looking for new card solutions in the meantime.

So what does the chart above show us?

There are a number of audio computer systems tested there from over the last few years, and the chart shows the continued growth in performance as newer hardware has been released. The stock i7 2600 proved to be a great performer when stacked up against the previous high-end Intel systems, even coming close to the hex-core flagship chips from that generation. What we also see is that once you take a 2600K and overclock it as we do here, the performance available is greater than the 990X for a great deal less cost. It has to be noted that the X58 platform has more available bandwidth, which can help increase performance in some real-world instances where the user is working with vast sample libraries, but the results we see here are a good indicator of how the machines will run for a more typical user.

Also worth noting in the performance results above is the i5 2500, as we use it in our entry-level value systems currently. Its performance is roughly half that of the overclocked 2600K system, and in real-world terms the cost of the system is roughly half as well, meaning that neither unit offers better value for money than the other in the cost-versus-performance stakes. In instances where your recording requirements are not quite as great, the value spec still offers plenty of power to get you going and complete smaller projects, even if it doesn’t offer the additional cooling and silencing features we have as standard on the high-end solutions. It’s also worth noting that the i5 2500 scores close to the last-generation i7 930, which shows how much performance improved between the last generation and the current one.

Our high-end laptop solution, in all but the very lowest latency situations, also proves to be pretty much on par with the X58-based i7 930, which itself still offers enough power to get the job done in all but the most demanding situations. This means the age of the full desktop-replacement laptop is very much with us, making it as easy to edit, mix and produce fully formed mixes on the road as it is to perform every night with the very same units.

Hopefully that helps explain how we rate audio computer systems in-house for performance testing and will help you decide upon your own next system. We run these tests on each new range we release, so keep an eye out for further articles showing testing results as new hardware reaches the market.

DAWBench Homepage