Looking back over the rather hectic first few months of 2018 in the PC industry, it’s clear that a lot has changed since the last CPU benchmark session late last year. In the space of six months, we’ve seen security concerns and the resulting software patches swing Windows performance back and forth as they’ve arrived thick and fast. I’ve largely been trying to wait it out and see how the dust settles, but with the release of new hardware, it’s time to get back into it.
My last bench was based on a build of Windows frozen in late 2016, and the associated drivers have gone through a number of revisions in the time since, so with the launch of Ryzen 2 it’s very much time for an all-new software bench to be set up.
Cubase has moved from 8.0 to 9.5, and Reaper too has advanced a number of builds, sitting at 5.79 at the point testing was initiated. This time around we also see the introduction of the newer SGA build of the DSP test, replacing the older DAWBench DSP test, along with the latest build of the DAWBench VI test.
Before getting underway, please note that the new results are in no way comparable to the older charts, other than for looking at the rough performance curve differences between certain chips, which do appear to be in line with prior results. They are certainly not directly comparable given all the bench changes that have taken place, and it’s always key to keep the playing field as level as possible when doing these comparisons.
This time around I’ve tried to run each chip at its turbo frequency across all cores once again. Modern chips tend to be rated with both a stock clock and a turbo clock, although what isn’t always clear is that the max turbo rating often only applies over one or two cores by default.
Historically it’s been relatively easy to run most CPUs with all cores pushed to and locked at the turbo max. However, when a platform is pushed too hard, this isn’t always viable. For instance, I saw this in testing some of the higher-end i9s, where I chose to run all cores at 4.1GHz rather than leave the chip at stock and let it boost two cores to 4.2GHz with a far lower average across the rest, which would have left me open to possible audio interruptions due to clocking.
It’s also the case here with the 2700X, where the machine would hang if I tried to push every core to the 4.2GHz rated turbo speed. Instead, I tried to clock it up both manually and using the AMD tool, both of which topped out around 4.1GHz. After speaking to my gaming team and realising this is fairly common (a number of other reviews have picked up on it as well), I ended up using the utility to set everything up with the slightly lower all-core turbo of 4.1GHz and testing there.
The 2700X here slots in behind the 8700K which leads by just short of 20% extra overhead at the tightest buffer setting, and both chips look to scale upwards in a similar pattern as you increase the buffer setting. The 8700K seems to be the most suitable comparison here as the price point (at time of writing in the UK) is around £30 more or about 10% more than the cost of the 2700X at launch.
The story of the performance curve scaling looks to repeat when we come to examine the 2600X and, by comparison, the 8600K from Intel. This time around, however, the results are reversed, with the Intel chip lagging behind the AMD model by about 5% across the buffer settings, whilst the AMD costs around £25 less, making it roughly 12% cheaper at launch.
So a strong showing for the DSP test, where we’re mostly throwing a load of small VST plugins at the CPU. The other test we run here is the DAWBench VI test, based on stacking up Kontakt instances, which allows us to test the memory response through sample loading alongside the CPU load we see with the DSP test.
With the Gen1 Ryzens, we saw them perform worse here overall, something we suspect was down to memory response and performance. AMD saw similar performance issues across various segments with certain core software, ranging from gaming to video processing, and there was a lot of noise and multiple attempts to improve this over the life cycle of the chip. One suggestion we saw pay off to some extent in other segments (once again, video and gaming made notable gains) was to move over to faster memory speeds.
We didn’t see any improvement here for audio applications, although in this instance all testing (both Intel and AMD) has been carried out with 3200MHz RAM, in the interest of trying to maximize the performance where we can as well as keeping things level in that regard.
The headline figure this time around suggests a rough 10% improvement in IPC (instructions per clock), which of course is promising, although notably this is the area where AMD was lagging behind Intel even after bringing Ryzen to market. In the interim we’ve seen the Coffee Lake launch, which also improved Intel’s IPC, meaning that whilst AMD has been catching up rapidly of late, Intel seems intent on clawing back the lead with each successive launch.
Looking it over this time, both the 2700X and 2600X fall behind their comparable Intel chips. The 2600X is roughly 20% lower than the 8600K here, although it’s moving up to the 2700X that proves more interesting, if only because it helps to outline what’s occurred between the two generational releases.
The older 1800X stood up well against the 7700K at its launch, and indeed that extra 10% IPC boost we see this time may well have given it a solid lead over the Intel chip, if not for the Coffee Lake release in the interim in the shape of the 8700K, which pulls off a convincing lead at this price point currently. Indeed, not only does the 8700K show gains over the previous 7700K, but it also overtakes the more expensive, although admittedly older, entry-level 6-core 7800X on Intel’s own enthusiast platform.
The 2700X is comparable to the 7800X at a far keener price point, although as noted the 7800X exists as a bit of an oddity by this point, even within its own range. So whilst this might have been a more impressive comparison 12 months ago, now it feels like they may have landed it just a few months too late to make serious waves.
Speaking from an audio point of view, the chips are good, but not exactly groundbreaking. If you also work in another segment where the AMD chips are known to have strengths, then the good news is that they offer reasonable bang per buck for audio and hold their ground well in terms of performance at those price points.
But once again, they don’t appear to be breaking any performance-to-cost records, at least for the audio market. They’ve made solid gains, but then so did Intel last time around, and this is often how it goes with CPUs when we have the firms battling it out for market share. Not that this is a bad thing; it certainly benefits the end user, whichever your choice of platform.
As a closing note, in my early generation 1 testing I saw a number of interfaces fail to enumerate on the AMD boards. I reported this to a few manufacturers, and interestingly the device that first showed up problems on the X370 boards (in this instance a UAD Twin USB) is behaving superbly on the X470 platform.
Whilst this is a sample size of approximately “1” unit in a range, it does point towards a reworking of the USB subsystem this time around, which can only be a positive. Anyone who was perhaps considering the Ryzen 1 platform but found themselves out of luck with interface compatibility might well fare far better this time around. Obviously, if there were known problems before, please do check with the manufacturer you’re considering for the latest compatibility notes in each instance.
Looking forward, there is a rumoured 2800X flagship Ryzen which is already well discussed but as yet has no release date on the horizon. There has already been discussion, rumours and even some testing and validation leaks out in the wild suggesting that Intel might be sitting on an 8-core Coffee Lake. It would certainly make sense for them to keep such a chip waiting in the wings while they see the public reaction to these new AMD chips. Similarly, it might turn out that the 2800X is being held back as an answer to those rumoured Intel models should they suddenly appear on the market in the near future.
To wrap it up, essentially we’re in peak rumour season and I’ve no doubt we’ll continue to see a pattern of one-upmanship for the foreseeable future, which continues to be a very positive thing indeed. If you need to buy a system today, then the charts should help guide you, although if you’re not in a rush right now, I’m sure there will be some interesting hardware to consider over the year ahead.
No doubt, the hottest topic in I.T. at the start of 2018 continues to be the CPU security risks that have come to light as 2017 came to a close.
Otherwise known as “Spectre” and “Meltdown”, an exhaustive amount of information has been written already in regards to how these design flaws can lead to data being accessed within the computer by processes or other code that shouldn’t have access to it, potentially leaving the system open to attack by malicious code run on the computer.
For instance, one of the more concerning attack vectors in this scenario is a server hosting multiple customers on one system. In a world where it’s common for many virtual machines to share one host in order to keep customers separate and secure, allowing this type of code to access data across those boundaries opens up the possibility of transaction details, passwords and other customer records being exposed, in a manner that has obviously raised a large amount of concern among security professionals and end consumers alike.
Off the back of this have emerged the patches and updates required to solve the issue, and along with those are some rather alarming headline figures regarding performance levels potentially taking a hit, with claims of anywhere up to 30% overhead being eaten away by certain types of workload.
As there are many great resources already explaining this, including this one here, I’m not going to delve too much into the background of the issues, and will instead focus on the results of the updates being applied.
We’re going to look at both the Microsoft patch at the software level and test the BIOS update released to support it. There are two issues here, Meltdown and Spectre, and there happen to be two variants of Spectre, one of which can be handled at the software level, with the other requiring a microcode update applied via a BIOS update.
Microsoft has, of course, released their own advisory notes, which are certainly worth a review too and available here. At this time it is advised that Meltdown and all Spectre variants can affect Intel CPUs and some ARM-compatible mobile chips, whereas AMD is only affected by the Spectre variants, with AMD themselves having just issued an updated advisory at the time of writing, which can be found here. This is also a largely OS-agnostic issue, with Microsoft, Apple, Linux and even mobile OSs all having the potential to be affected, and all have spent the last few weeks rapidly deploying updates and patches to their users.
At this point, I’m just going to quote a portion taken from the Microsoft link above verbatim, as it outlines the performance concerns we’re going to look at today. Keep in mind that in the text below “variant 1 & 2” are both referring to the Spectre issues, whereas Meltdown is referred to as simply “variant 3”.
One of the questions for all these fixes is the impact they could have on the performance of both PCs and servers. It is important to note that many of the benchmarks published so far do not include both OS and silicon updates. We’re performing our own sets of benchmarks and will publish them when complete, but I also want to note that we are simultaneously working on further refining our work to tune performance. In general, our experience is that Variant 1 and Variant 3 mitigations have minimal performance impact, while Variant 2 remediation, including OS and microcode, has a performance impact.
Here is the summary of what we have found so far:
With Windows 10 on newer silicon (2016-era PCs with Skylake, Kabylake or newer CPU), benchmarks show single-digit slowdowns, but we don’t expect most users to notice a change because these percentages are reflected in milliseconds.
With Windows 10 on older silicon (2015-era PCs with Haswell or older CPU), some benchmarks show more significant slowdowns, and we expect that some users will notice a decrease in system performance.
With Windows 8 and Windows 7 on older silicon (2015-era PCs with Haswell or older CPU), we expect most users to notice a decrease in system performance.
Windows Server on any silicon, especially in any IO-intensive application, shows a more significant performance impact when you enable the mitigations to isolate untrusted code within a Windows Server instance. This is why you want to be careful to evaluate the risk of untrusted code for each Windows Server instance, and balance the security versus performance tradeoff for your environment.
For context, on newer CPUs such as on Skylake and beyond, Intel has refined the instructions used to disable branch speculation to be more specific to indirect branches, reducing the overall performance penalty of the Spectre mitigation. Older versions of Windows have a larger performance impact because Windows 7 and Windows 8 have more user-kernel transitions because of legacy design decisions, such as all font rendering taking place in the kernel. We will publish data on benchmark performance in the weeks ahead.
The testing outlined here today is based on current hardware and Windows 10. Specifically, the board is an Asus Z370 Prime A, running on a Samsung PM961 M.2 drive, with a secondary small PNY SSD attached. The CPU is an i5 8600 and there is 16GB of memory in the system.
Software-wise, updates for Windows were completed right up to the 01/01/18 point; the patch from Microsoft to address this was released on 03/01/18 and is named “KB4056892”. I start the testing with the 605 BIOS from late 2017 and move through to the 606 BIOS designed to apply the microcode update specified by Intel.
Early reports have suggested a hit to the drive subsystem, so at each stage, I’m going to test this and of course, I’ll be monitoring the CPU performance as each step is applied. Also keep in mind that as outlined in the Microsoft advisory above, different generations of hardware and solutions from different suppliers will be affected differently, but as Intel is suggested as being the hardest hit by these problems, it makes sense to examine a current generation to start with.
Going into this, I was hopeful that we wouldn’t see a whole load of processing power lost, simply because the already public explanations of how the flaw could potentially affect the system didn’t read as something that should majorly impact the way an audio system handles itself.
Largely it’s played out as expected. When you’re working away within your sequencer, the ASIO driver is there doing its best to keep itself as a priority, and generally, if the system is tuned to work well for music, there shouldn’t be a million programs in the background being affected by this and causing the update to steal processing time. So, given we’re not running the sort of server-related workloads that I would expect to cause too much of an upset here, I was fairly confident that the impact shouldn’t be as bad as some suggestions had made out, and largely on the processing side it plays out like that.
However, prior to starting the testing, it was reported that storage subsystems were taking a hit due to these patches and that of course demanded that we take a look at it along the way too. Starting with the worst news first, those previous reports are very much on the ball. I had two drives connected and below we see the first set of results taken from a Samsung M.2. SM961 model drive.
To help give you a little more background on what’s being tested here, each test should be as follows:
Seq Q32T1 – sequential read/write with multiple threads and queues
4K Q32T1 – random read/write with multiple threads and queues
Seq – sequential read/write with a single queue and thread
4K – random read/write with a single queue and thread
There is no doubt a performance hit here to the smaller 4K files, which is amplified as more threads are taken up to handle the workload in the 4K Q32T1 test. On the flip side, the sequential handling seems to either escape relatively unscathed or in some instances even improve to some degree, so there is some trade-off here depending on the workload being handled.
The gains did confuse me at first, and whilst first sifting through the data I started to wonder if, given we were running off the OS drive, other services had perhaps skewed it slightly. Thankfully, I also had a project SSD hooked up to the system, so we can compare a second data point against the first.
The 4K results still show a decrease and the sequential results once again hold fairly steady with a few read gains, so it looks like some rebalancing of the performance levels has taken place here too, whether intentional or not.
The DAWBench testing, on the other hand, ends up with a more positive result. This time around I’ve pulled out the newer SGA-based DSP test, as well as the Kontakt-based DAWBench VI test, and both were run within Reaper.
The result of the DSP test, which concentrates on loading up the CPU, shows little difference beyond the margin of error and variance. It should also be noted that the CPU was running at 99% load when it topped out, so we don’t appear to be losing any overhead in that regard.
With the Kontakt based DAWBench VI test, we’re seeing anything between 5% and 8% overhead reduction depending on the ASIO buffer, with the tightest 64 buffer suffering after each update whereas the looser settings coped better with the software update before taking a small hit when we get up to the 256 buffer.
Ultimately the concern here is how will it impact you in real terms?
The minor loss of overhead on the second testing set was from a Kontakt-heavy project, and the outcome from the drive tests would suggest that anyone with a sample library that relies heavily on disk streaming may wish to be careful with any projects that are already on the edge prior to the update being applied.
I also timed that project being loaded across all three states of the update process, with the baseline time to open the project being 20 seconds. After the software update we saw no change, with the project still taking 20 seconds to open. However, the BIOS update, once applied along with the OS update, added two seconds, giving us roughly a 10% increase in project load time.
So at this time, whilst any performance loss is certainly not welcome, we’re thankfully not seeing quite the huge skew in the performance stakes that has been touted, and certainly well short of the 30% figure that was being suggested initially for the CPU hit.
There have been suggestions by Microsoft that older generations might be more severely affected, and from the description of how it affects servers I suspect that we may well see that 30% figure, or even higher, under certain workloads in server environments. I suspect, though, that this will be centered more around the database and virtual machine server segments than the creative workstation user base.
Outside of our small corner of the world, TechSpot has been running a series of tests since the news broke and it’s interesting to see other software outside of the audio workstation environment seems to be largely behaving the same for a lot of end users, as are the storage setups that they have tested. If you’d like to read through those you can do so here.
The issue was discovered over the course of 2017 but largely kept under wraps so it couldn’t be exploited at the time. However, the existence of the problem leaked before the NDA was lifted, and it feels like a few of the solutions that have been pushed out in the days since may have been a little rushed in order to stop anyone more unethical capitalizing upon it.
As such, I would expect performance to bounce around over the next few months as they test, tweak and release new drivers, firmware and BIOS solutions. The concern right now for firms is ensuring that systems around the world are secure and I would expect there to be a period of optimization to follow once they have removed the risk of malware or worse impacting the end user.
Thankfully, it’s possible to remove the patch after you apply it, so in a worst case scenario you can still revert back and block it should it have a more adverse effect upon your system, although it will then leave you open to possible attacks. Of course, leaving the machine offline will obviously protect you, but then that can be a hard thing to do in a modern studio where software maintenance and remote collaboration are both almost daily requirements for many users.
However you choose to proceed will no doubt be system- and situation-specific, and I suspect that as updates appear, the best practice for your system may change over the coming months. Certainly, the best advice I can offer here is to keep your eye on how this develops, make the choices that keep you secure without hampering your workflow, and review the situation going forward to see if further optimizations can help restore things to pre-patch levels as a resolution to the problem is worked on by both the hardware and software providers.
Another month and another chip round up, with them still coming thick and fast, hitting the shelves at almost an unprecedented rate.
AMD’s Ryzen range arrived with us towards the end of Q1 this year, and its impact upon the wider market sent shockwaves through the computer industry, something AMD hadn’t managed in well over a decade.
Although well received at launch, the Ryzen platform did have the sort of early teething problems that you would expect from any first-generation implementation of a new chipset range. Its strength was that it was great for any software that could effectively leverage the processing performance on offer across the multitude of cores being made available. Whilst perfect for a great many tasks across any number of market segments, the platform did also have its inherent weaknesses, which would crop up in various scenarios; one field where its design limitations were apparent was real-time audio.
Getting to the core of the problem.
There is one bit of well-meaning advice that drives system builders up the wall, and that is the “clocks over cores” wisdom that has been offered up by DAW software firms since what feels like the dawn of time. It’s a double-edged sword in that it tries to simplify a complicated issue without ever explaining why, or in what situations it truly matters.
To give a bit of crucial background as to why this might be, we need to start from the understanding that your DAW software is pretty lousy at parallelization.
That’s it, the dirty secret. The one thing computers are good at is breaking down complex chains of data for quick and easy processing, except in this instance, not so much.
Audio works with real-time buffers. Your ASIO driver has those 64/128/256 buffer settings, which are nothing more than chunks of time: data entering the system is captured and held in a buffer until the buffer is full, before being passed over to the CPU to do its magic and get the work done.
If the workload is processed before the next buffer is full then life is great and everything is working as intended. If however the buffer becomes full prior to the previous batch of information being dealt with, then data is lost and this translates to your ears as clicks and pops in the audio.
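To put rough numbers on that deadline, here is a back-of-envelope sketch (the 44.1kHz sample rate and buffer sizes are my illustrative values, not tied to any particular driver): the time budget per buffer is simply the buffer size divided by the sample rate.

```python
# Rough sketch: the real-time deadline an ASIO buffer setting imposes.
# Illustrative values only; actual figures depend on your interface/driver.
SAMPLE_RATE = 44_100  # samples per second

def buffer_deadline_ms(buffer_samples: int, sample_rate: int = SAMPLE_RATE) -> float:
    """Milliseconds the CPU has to finish one buffer's worth of processing."""
    return buffer_samples / sample_rate * 1000

for size in (64, 128, 256):
    print(f"{size:>4} samples -> {buffer_deadline_ms(size):.2f} ms to finish the work")
```

At a 64-sample buffer the CPU has well under 2ms to process every channel chain, which is why tighter buffers punish any per-core weakness so hard.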
Now, with a single-core system, this is straightforward. Say you’re working with one track of audio to process with some effects. The whole track would be sent to the CPU, the CPU processes the chain and spits out some audio for you to hear.
So far so easy.
Now say you have 2 or 3 tracks of audio and 1 core. These tracks will be processed on the available core one at a time, and assuming all the tracks in the pile are processed prior to the buffer reset, then we’re still good. In this instance, with a faster core to work with, more of these chains can be processed within the allocated buffer time, so more speed certainly means more processing being done in this example.
So now we consider systems with two or more cores. The channel chains are passed to the cores as they become available, and once more the whole channel chain is processed on a single core.
Why? Because to split a channel over more than one core would require us to divide up the workload and then recombine it all again post-processing, which for real-time audio would leave other components in the chain waiting for the data to be shuttled back and forth between the cores. All this lag means we’d lose processing cycles as that data is ferried about, and we’d continue to lose more performance with each and every added core, something I will often refer to as processing overhead.
Now, the upshot of this is that lower-clocked chips can often prove less efficient than higher-clocked chips, especially with newer, more demanding software.
So, for an admittedly extreme example, say that you have the two following chips:
CPU 1 has 12 cores running at 2GHz
CPU 2 has 4 cores running at 4GHz
The maths looks simple: 12 × 2 beats 4 × 4 on paper. But in this situation, it comes down to software and processing chain complexity. If you have a particularly demanding plugin chain that is capable of overloading one of those 2GHz CPU cores, then the resulting glitching will proceed to ruin the output from the other 11 cores.
In this situation, the more overhead you have to play with on each core, the less chance there is that an overly demanding plugin is going to be able to sink the lot.
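The trade-off above can be sketched in a few lines of code. This is a deliberately simplified model of my own making (real schedulers are far smarter): each chain must fit on one core, and the total work must fit the chip as a whole.

```python
# Hypothetical sketch of the "clocks over cores" trade-off.
# A channel chain runs whole on one core, so the heaviest single chain is
# limited by per-core speed, while total load is limited by the summed speed.
def system_copes(chain_loads_ghz, cores, core_ghz):
    """True only if every chain fits on one core AND total work fits the chip."""
    heaviest_fits = max(chain_loads_ghz) <= core_ghz
    total_fits = sum(chain_loads_ghz) <= cores * core_ghz
    return heaviest_fits and total_fits

# One demanding chain needing 3GHz-worth of core, plus some light ones:
chains = [3.0, 1.0, 1.0, 1.0]

print(system_copes(chains, cores=12, core_ghz=2.0))  # False: heavy chain sinks a 2GHz core
print(system_copes(chains, cores=4, core_ghz=4.0))   # True: a 4GHz core absorbs it
```

Despite the 12-core chip having 24GHz of paper throughput against 16GHz, the single demanding chain glitches it, which is the whole point of the advice.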
This is also one of the reasons we tend to steer clear of server CPUs with high core counts and low clock speeds, and is largely what the general advice is referring to.
On the other hand, when we talk about 4-core CPUs at 4GHz vs 8-core CPUs at 3.5GHz, the difference in clock speeds isn’t going to be enough to cause problems with even the busiest of chains, and once that is the case, more cores on a single chip become the more attractive proposition as far as getting the best performance out is concerned.
So with that covered, we’ll quickly cover the other problematic issue when working with server chips, which is the data exchange process between memory banks.
Dual-chip systems are capable of offering the ultimate levels of performance, this much is true, but we have to remember that returns on your investment diminish quickly as we move through the models.
Not only do we have the concerns outlined above about cores and clocks, but when you move to dealing with more than one CPU you have to start to consider “NUMA” (Non-uniform memory access) overheads caused by using multiple processors.
CPUs can exchange data between themselves via high-speed connections, and in AMD’s case this is done via an extension of the Infinity Fabric design that allows the quick exchange of data between the cores both on and off the chip(s). The memory holds data until it’s needed, and in order to ensure the best performance, the system tries to store the data in the physical RAM bank nearest to the core using it. By keeping the distance between them as short as possible, it ensures the least amount of lag between information being requested and it being received.
This is fine when dealing with 1 CPU and in the event that a bank of RAM is full, then moving and rebalancing the data across other memory banks isn’t going to add too much lag to the data being retrieved. However when you add a second CPU to the setup and an additional set of memory banks, then you suddenly find yourself trying to manage the data being sent and called between the chips as well as the memory banks attached. In this instance when a RAM bank is full then it might end up bouncing the data to free space on a bank connected to the other CPU in the system, meaning the data may have to travel that much further across the board when being accessed.
As we discussed in the previous section, any wait for data can cause inefficiencies where the CPU has to sit idle until the data arrives. All this happens in microseconds, but if it ends up happening hundreds of thousands of times every second, our ASIO meter ends up looking like it’s overloading due to lagged data being dropped everywhere, whilst our CPU performance meter may look like it’s only half used at the same time.
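A toy model shows how those microseconds add up. The latency figures below are my own illustrative assumptions (real NUMA penalties vary by platform); the point is that a small per-access penalty, multiplied by a huge access rate, eats a measurable slice of each buffer’s time budget.

```python
# Toy model (illustrative numbers only) of the NUMA penalty's effect on audio.
LOCAL_NS = 80     # assumed local-bank access latency, nanoseconds
REMOTE_NS = 140   # assumed remote-bank access latency (via the other CPU)

def extra_ms_per_buffer(accesses_per_sec, remote_fraction, buffer_ms):
    """Added wait time per audio buffer when a fraction of accesses go remote."""
    penalty_ns_per_sec = (REMOTE_NS - LOCAL_NS) * accesses_per_sec * remote_fraction
    # Convert ns-per-second of stall into ms lost within one buffer's window.
    return penalty_ns_per_sec / 1e6 * (buffer_ms / 1000)

# 500k memory accesses/sec, 40% landing on the far CPU's RAM, 1.45ms (64-sample) buffer:
print(f"{extra_ms_per_buffer(500_000, 0.4, 1.45):.4f} ms lost per buffer")
```

Even in this mild scenario a slice of the tight 64-sample window is burned purely on waiting, before a single plugin has done any work.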
This means that we do tend to expect there to be an overhead when dealing with dual-chip systems. Exactly how much depends entirely on what’s being run on each channel and how much data is being exchanged internally between those chips, but the take-home is that we expect to have to pay a lot more for server-grade solutions to match the high-end enthusiast-class chips we see in the consumer market, at least in situations where real-time workloads are crucial, such as dealing with ASIO-based audio. It’s a completely different scenario with a task like offline rendering for video, where the processor and RAM are system-managed, working on their own time and to their own rules; there, server-grade CPU options make a lot of sense and are very, very efficient.
To server and protect
So why all the server background when we’re looking at desktop chips today? Indeed, Threadripper has been positioned as AMD’s answer to Intel’s enthusiast range of chips, and largely a direct response to the i7 and i9 7800X, 7820X and 7900X chips that launched just last month, with AMD’s Epyc server-grade chips still sat in waiting.
An early de-lidding of the Threadripper series chips quickly showed us that the basis of the new chips is two Zen CPUs connected together. The “Infinity Fabric” core interconnect design makes it easy for them to add more cores and expand these chips up through the range; indeed, their server solution EPYC is based on the same “Zen” building blocks at its heart as both Ryzen and Threadripper, with just more cores piled in there.
Knowing this before testing gave me certain expectations going in that I wanted to examine. The first was Ryzen’s previously inefficient core handling when dealing with low-latency workloads, where we established in the earlier coverage that the efficiency of the processor at lower buffer settings would suffer.
This I suspected was an example of data transference lag between cores, and at the time of that last look we weren’t certain how constant this might prove to be across the range. Without more experience of the platform, we didn’t know if this was something inherent to the design or if it might be solved in a later update. As we’ve seen since its launch, and having checked other CPUs in testing, this performance scaling seems to be constant across all the chips we’ve seen so far, and something that can certainly be consistently replicated.
Given that it’s now a known constant in how it behaves, we’re happy that there aren’t further hidden underlying concerns here. If the CPU performs as you require at the buffer setting you need it to handle, then that is more than good enough for most end users. The fact that it balances out around the 192 buffer level on Ryzen, where we see 95% of the CPU power being leveraged, means that for plenty of users who don’t share those low-latency concerns, such as mastering engineers who work at higher buffer settings, this could still be a good fit in the studio.
However, knowing about this constant performance response at certain buffer settings made me wonder if it would carry across to Threadripper. The announcement that this was going to be two CPUs connected together on one package raised my concern that it would experience the same sort of problems we see with Xeon server chips, as we take a further performance hit through NUMA overheads.
So with all that in mind, on with the benchmarks…
On your marks
I took a look at the two Threadripper CPUs available to us at launch.
The flagship 1950X features 16 cores and a total of 32 threads, with a base clock of 3.4GHz and a potential turbo of 4GHz.
Alongside it I also took a look at the 1920X, a 12-core, 24-thread part with a base clock of 3.5GHz and an advised potential turbo of 4GHz.
First impressions weren’t too dissimilar to when we looked at the Intel i9 launch last month. These chips have a reported 180W TDP at stock settings placing them above the i9 7900X with its purported 140W TDP.
Also, much like the i9s we’ve seen previously, it fast became apparent that as soon as you place these chips under stressful loads you can expect that power usage to scale up quickly. This is something to keep in mind with either platform, where real-term power usage can rapidly increase when a machine is being pushed heavily.
History shows us that every time a CPU war starts, the first casualty is often your system temperatures: the easiest way to increase a CPU’s performance quickly is simply to ramp the clock speeds, although this often dumps a disproportionate amount of heat into the system as a result. We’ve seen a lot of discussion in recent years about the “improve and refine” product cycles with CPUs, where new tech in the shape of a die shrink is introduced and then refined over the next generation or two as temperatures and power usage are reduced again, before the whole cycle starts afresh.
What this means is that with the first generation of any CPU we don’t always expect a huge overclock out of it, and this is certainly the case here. Once again, for contrast, the 1950X, much like the i9 7900X, is running hot enough at stock clock settings that even with a great cooler it’s struggling to reach the limit of its advised potential overclock.
Running with a Corsair H110i cooler, the chip would only hold a stable clock without problems at around the 3.7GHz level. The board itself ships with a default 4GHz setting, which when tried would reset the system whilst running the relatively lightweight Geekbench test routine. I tried to set up a working overclock around that level, but the P-states would quickly throttle me back once it went above 3.8GHz, leaving me to fall back to the 3.7GHz point. This is technically an overclock from the base clock but doesn’t meet the suggested turbo max of 4GHz, so the take-home is to make sure you invest in great cooling when working with one of these chips.
Speaking of Geekbench, it’s time to break that one out.
I must admit to having expected more from the multi-core score, especially on the 1950X, even to the point of double-checking the results a number of times. I did take a look at the published results on launch day and my own scores were pretty much in line with the other results there at the time. Even now, a few days later, it still appears to be within 10% of the best published results for the chip, which says to me that some people look to have got a bit of an overclock going on their new setups, but we’re certainly not going to be seeing anything extreme anytime soon.
When comparing the Geekbench results to scores from recent chip coverage, it’s largely as we’d expect with the single-core scores, and a welcome improvement over the Ryzen 1700X. AMD has clearly done some fine-tuning under the hood, as the single-core score has seen gains of around 10% even whilst running at a slightly slower per-core clock.
One thing I will note at this point is that I was running with 3200MHz memory this time around. There were reports after the Ryzen launch that running higher-clocked memory could help improve CPU performance in some scenarios, and it’s possible the single-core jump we’re seeing is down as much to the increase in memory clocks as anything else. A number of people have asked me if this impacts audio performance at all; I’ve done some testing with the production-run 1800Xs and 1700Xs in the months since but haven’t seen any benefit to raising the memory clock speeds for real-time audio handling.
We did suspect this would be the outcome as we headed into testing, as memory for audio has been faster than it needs to be for a long time now, although admittedly it was great to revisit it once more and make sure. As long as the system RAM is fast enough to deal with that ASIO buffer, then raising the memory clock speed isn’t going to improve the audio handling in a measurable fashion.
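For a sense of scale, the raw streaming bandwidth even a large session demands is tiny next to what modern RAM delivers. A rough back-of-the-envelope sketch (figures are illustrative, not measurements):

```python
# Back-of-the-envelope sketch: streaming bandwidth of an audio session
# versus what system RAM can deliver. Figures are illustrative.

def audio_bandwidth_mb_s(channels, sample_rate, bytes_per_sample=4):
    """Sustained throughput in MB/s for a multitrack stream of 32-bit floats."""
    return channels * sample_rate * bytes_per_sample / 1e6

heavy_session = audio_bandwidth_mb_s(128, 96_000)
print(heavy_session)  # 49.152 MB/s

# Dual-channel DDR4-3200 peaks at roughly 50,000 MB/s, so even a huge
# session consumes around 0.1% of the available memory bandwidth.
```

Which is why raising the memory clock further yields no measurable gain for real-time audio: the RAM is nowhere near saturated to begin with.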
The multi-core results show the new AMDs slotted in between the current and last-generation Intel top-end models. Whilst the AMDs have made solid performance gains over earlier generations, it has still been widely reported that their IPC (instructions per clock cycle) scores remain behind the sort of results returned by the Intel chips.
Going back to our earlier discussion about how much code you can action on any given CPU core within an ASIO buffer cycle, the key to this is the IPC capability. The quicker the code can be actioned, the more efficiently your audio gets processed and so the more you can do overall. This is perhaps the biggest source of confusion when people quote “clocks over cores”, as rarely are any two CPUs comparable on clock speeds alone, and a chip with better IPC can often outperform CPUs with higher quoted clock frequencies but a lower IPC score.
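As a rough illustration of that point (the figures below are hypothetical, not measured IPC values for any real chip), per-core throughput is approximately the clock speed multiplied by the IPC:

```python
# Hypothetical figures only, not measured IPC values for any real chip:
# per-core throughput is roughly clock speed multiplied by IPC.

def effective_throughput(clock_ghz, ipc):
    """Approximate billions of instructions retired per second on one core."""
    return clock_ghz * ipc

chip_a = effective_throughput(4.3, 2.0)  # higher clock, weaker IPC
chip_b = effective_throughput(4.0, 2.4)  # lower clock, stronger IPC

print(chip_a)  # 8.6
print(chip_b)  # 9.6 -- the "slower" chip actions more code per second
```

So a chip clocked 300MHz lower can still clear more of the ASIO buffer per cycle, which is exactly why quoting clock speeds alone misleads.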
So lengthy explanations aside, we get to the crux of it all.
Much like the Ryzen tests before it, the Threadrippers hold up well in the older DAWBench DSP testing run.
Both of the chips show gains over the Intel flagship i9 7900X and given this test uses a single plugin with stacked instances of it and a few channels of audio, what we end up measuring here is raw processor performance by simply stacking them high and letting it get on with it.
There is no disputing that there is a sizable slice of performance to be had here. Much like our previous coverage, however, it starts to show up some performance irregularities when you examine other scenarios, such as the more complex Kontakt-based DAWBench VI test.
The earlier scaling at low buffer settings is still apparent this time around, although it looks to have been compounded by the hard NUMA addressing in place due to the multi-die-on-one-package design in use. It once more scales upwards as the buffer is slackened off, but even at the 512 buffer setting I tested, it could only achieve 90% CPU use under load.
That, to be fair, is very much what I would expect from any server-CPU-based system. In fact, on its own the memory addressing here seems pretty capable compared to some of the other options I’ve seen over the years; it’s just a shame that the other performance response amplifies the symptoms when the system is stressed.
AMD, to their credit, is perfectly aware of the pitfalls of trying to market what is essentially a server CPU setup to an enthusiast market. Their Windows overclocking tool has various options to control and optimize how it deals with NUMA and memory addressing, as you can see below.
I did have a fiddle around with some of the settings here, and Creator Mode did give me some marginal gains over the other options thanks to it appearing to arrange the memory in a well-organized and easy-to-address logical group. Ultimately, though, the performance dips we’re seeing come down to a physical addressing issue, in that data has to be moved from X to Y in a given time frame, and no amount of software magic will be able to resolve that for us, I suspect.
I think this one is pretty straightforward: if you need to be running below a 256 ASIO buffer this isn’t the chip for you, although there are certainly some arguments for mastering engineers who don’t need that sort of response.
Much like the Intel i9s before it, however, there is a strong suggestion that you really do need to consider your cooling carefully here. The normal low-noise, high-end air coolers I tend to favour for testing were largely overwhelmed once I placed these on the bench, and once the heat started to climb, the water cooler I was using had both fans screaming.
Older readers with long memories might have a clear recollection of the CPU wars that gave us the P4s, Prescotts, Athlon FXs and 64s. We saw both firms in a CPU arms race that only really ended when the i7s arrived with the X58 chipset. Over those years we saw ever-rising clock speeds, a rapid release schedule of CPUs and constant gains, although at the cost of heat and ultimately noise levels. In the years since, we’ve had refinement and a vast reduction in heat and noise, but little in the way of performance advancement, at least over the last 5 or 6 generations.
We finally have some really great choices from both firms, and depending on your exact needs and the price points you’re working at, there could be arguments in each direction. Personally, I wouldn’t consider server-class chips to be the ultimate studio solution from either firm currently, not unless you’re prepared to spend the sort of money that the tag “ultimate” tends to reflect, in which case you really won’t get anything better.
In this instance, if you’re doing a load of multimedia work alongside mastering for audio, this platform could fit your requirements well, but for writing and editing some music I’d be looking towards one of the other better value solutions unless this happens to fit your niche.
Ryzen is finally with us, and it is quite possibly one of the most anticipated platform launches in years, with initial reports and leaked benchmarks tending to show the whole platform in a very favourable light.
However, when it comes to pro audio handling we tend to have different performance concerns than those outlined and covered by more regular computer industry testing. So, having now had the chance to sit and work with an AMD 1700X for a week or so, we’ve put this brand new tech through some more audio-centric benchmarking, and today we’ll take a first look and see if it’s right for the studio.
AMD has developed a whole new platform with the focus on improving low-level performance and raising the IPC, or instructions per clock cycle, figure. As ever, they have been keen to keep it affordable, with certain choices made to keep it competitive, and to some extent these are the right choices for a lot of users.
The chipset gives us DDR4 memory, but unlike the X99 platform restricts us to dual-channel RAM configurations and a maximum of 64GB across the 4 RAM slots, which may limit its appeal for heavyweight VSL users. There is a single M.2 connection option for a high-speed NVMe drive and 32 lanes for PCIe connections, so the competing X99 solutions still offer more scope here, although for the average audio system these restrictions may present little to no real downside, at least from a configuration requirements point of view.
One thing missing from the specification that has an obvious impact in the studio, however, is the lack of Thunderbolt support. Thunderbolt solutions require BIOS-level and physical board-level support in the shape of the data communication header found on Intel boards, and Thunderbolt itself is an Intel-developed standard with Apple backing. With neither of those companies appearing keen to licence it up front, we’re unlikely to see Thunderbolt at launch, although that’s not to say this couldn’t change in later generations if the right agreements can be worked out between the firms involved.
Early testing with the drivers available to us has so far proven quite robust, with stability being great for what is essentially a first-generation release of a new chipset platform. We have seen a few interface issues involving older USB 2 interfaces and the USB 3 headers on the board, although the USB 3 headers we’ve seen are running the Microsoft USB 3 drivers, which admittedly have had a few issues on Intel boards with certain older USB 2-only interfaces, so this looks to be consistent across both platforms. Where we’ve seen issues on the Intel side we’re also seeing them on the AMD side, so we can’t level this as a chipset issue, and it may prove to be something the audio interface manufacturers can fix with either a driver or firmware update.
Overclocking has been limited in our initial testing phase, mainly due to a lack of tools. Current Windows testing software is having a hard time with temperature monitoring, with none of the tools available to us able to report the temps. This will no doubt resolve itself as everyone updates their software over the next few weeks, but until then we played it safe when pushing the clocks up on this initial batch.
We managed to boost our test 1700X up a few notches to around the level of the 1800X in the basic testing we carried out, but taking it further led to an unstable test bench. No doubt this will improve after launch as the silicon yields improve, and not having seen a 1800X as yet, that may still prove to be the cherry-picked option in the range when it comes to overclocking.
One of the interesting early reports to appear right before launch was the CPUID benchmark result, which suggests this may shape up to be one of the best-performing multi-core consumer-grade chips. We set out to replicate that test here, and the result does indeed look very promising on the surface.
We follow this up with a Geekbench 4 test, which is well trusted as a cross-platform CPU benchmark. The single-core performance reflects the results seen in the previous test, placing just behind the i7 7700K in the results chart. The multi-core result this time around, whilst strong, sits behind the 6900K, in this instance under the 6800K and above the 7700K.
Moving on to our more audio-centric benchmarks, our standard DAWBench test is first up. Designed to load-test the CPU itself, we find ourselves stacking plugin instances in order to measure the chips against a set of baseline results. The AMD proves itself strongly in this test, placing midway between the cost-equivalent 6-core Intel 6800K and the far more expensive 8-core 6900K. With the AMD 1700X offering 8 physical cores plus threading on top to take us to a virtual 16 cores, this at first glance looks to be where we would expect it to be with the hardware on offer, but at a very keen price point.
I wanted to try a few more real-world comparisons here, so first up I’ve taken the DAWBench test and restricted it to 20 channels of plugins. I’ve then applied this test to each of the CPUs on test, with the results appearing under the “Reaper” heading on the chart below.
The 1700X stands up well against the i7 7700K but doesn’t quite manage to match the Intel chips in this instance. In a test like this, where we’re not stressing the CPU itself or trying to overload the available bandwidth, the advantages in the low-level microarchitecture tend to come to the fore, and the two Intel chips based around the same platform perform roughly in line with each other, although in this test we’re not taking into account the extra bandwidth on offer with the 6900K.
Also on the same chart are two other test results, one being the 8 Good Reasons demo from Cubase 8, which we ran across the available CPUs to gain a comparison in a more real-world project. In this instance the results come back fairly level across the two high-end Intel CPUs and the AMD 1700X. The 4-core mid-range i7 scores poorly here, but this is expected, with the obvious lack of physical cores hampering the project playback load.
We also ran the “These Arms” Sonar demo and replicated the test process again. This test’s results are a bit more erratic, with a certain emphasis looking to be placed on the single-core score as well as the overall multi-core score. This is the first time we see the 1700X falling behind the Intel results.
In other testing we’ve done along the way, in other segments, we’ve seen some video rendering packages and even some games exhibiting CPU-based performance oddness that has looked out of the ordinary. Obviously we have a concern that there might be a weakness that needs to be addressed when it comes to overall audio system performance, so with this result in mind we decided to dig deeper.
To do so we’ve made use of the DAWBench VI test, which builds upon the basic DAWBench standard test and allows us to stack multiple layers of Kontakt-based instruments on top of it. With this test, not only are we placing a heavy load on the CPU, but we’re also stressing the memory subsystem and seeing how capable it is at quickly handling large, complex data loads.
This gave us the results in the chart above, which start to shine some light on the concerns that we have.
In this instance the AMD 1700X under-performs all of the Intel chips at lower buffer settings. It does scale up steadily, however, so this looks to be an issue with how quickly it can process the contents of a buffer load.
So what’s going on here?
Well, the other relevant information to flesh out the chart above is just how much CPU load was in use when the audio started to break up in playback.
So the big problem here appears to be inefficiency at lower buffer settings. The ASIO buffer throws data at the CPU in quicker bursts the lower you go with the setting, and with the audio crackling and breaking up it seems the CPU just isn’t clearing the buffer quickly enough once it gets to around 70% CPU load at those lower 64 and 128 buffer settings.
Intel at this buffer setting looks to be hitting 85% or higher, so whilst the AMD chip may have more raw performance to hand, the responsiveness of the rest of the architecture appears to be letting it down. It’s no big secret, looking over the early reviews, that whilst AMD has made some amazing gains with IPC rates this generation, they still appear to be lagging slightly behind Intel in this metric.
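To put those buffer settings into time terms, the deadline the CPU must meet for each buffer is simply the buffer length divided by the sample rate. A quick sketch, assuming a 44.1kHz project:

```python
# The deadline the CPU must meet per buffer is buffer length divided by
# sample rate; a 44.1kHz project is assumed here.

def buffer_deadline_ms(buffer_samples, sample_rate=44_100):
    """Time available to process one ASIO buffer, in milliseconds."""
    return buffer_samples / sample_rate * 1000

for size in (64, 128, 256, 512):
    print(size, round(buffer_deadline_ms(size), 2))

# A 64-sample buffer gives the CPU only ~1.45 ms per cycle, while 512
# allows ~11.61 ms, which is why architectural lag that is invisible at
# the relaxed settings shows up so sharply at the small ones.
```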
The results start to outline this as one of the key weaknesses of the Ryzen configuration, with it becoming quite apparent that there are bottlenecks elsewhere in the architecture coming into play beyond the new CPUs. At the lower buffer settings the test tends to favour single-core performance, with the Intel chips taking a solid lead. As you slacken off the buffer, more cores become the better option as the system is able to spread the load, but even then it isn’t until we hit a 192 buffer setting on the ASIO drivers that performance catches up with the Intel 4-core CPU.
This appears to be one area where AMD performance still seems to be lacking compared with the Intel family, be that due to hardware bottlenecks or to not quite having caught up in overall IPC handling at the chipset level.
What we also see is performance starting to catch up with Intel again as the buffer is relaxed, so it’s clear that a certain amount of performance is still there to be had; the system just can’t access it quickly enough when placed under heavy, complex loads.
What we can safely say, having taken this look at the Ryzen platform, is that across the tests we’ve carried out so far the AMD platform has made some serious gains this generation. Indeed, there is no denying that there are going to be more than a few scenarios where the AMD hardware is able to compete with, and will beat, the competition.
However, with the bottlenecks we’ve seen concerning load balancing of complex audio chains, there is a lot of concern that it simply won’t offer the required bang per buck for a dedicated studio PC. As the silicon continues to be refined and the chipset and drivers are fine-tuned, we should see the whole platform move from strength to strength, but at this stage, until more is known about the strengths and weaknesses of the hardware, you should be aware that it has both pros and cons to consider.
It’s been a good year or so since we’ve managed to do a proper group testing session here in the office on the system side of things, and the launch of a new processor selection often raises any number of questions regarding upgrading or even replacing older setups with the newer chipset solutions. With the launch of Intel’s new Haswell CPUs over the weekend, and rumours reaching us of AMD’s latest CPUs getting a solid performance boost, it looks to be the ideal time to carry out a round-up.
During that time, however, the team over at DAWBench have updated and refined the basic test to allow the performance heights the new chips are reaching to be more easily measured. The new test doesn’t scale in quite the same fashion as the older version, so this time around it has required a full group retest to ensure everything on the chart is as accurate as possible, meaning a number of older systems have dropped off the testing list due to a lack of available hardware or incompatibility with the newer testing environment.
The other change of note this time around is the interface being used for the task itself. In the past we used an internal RME card up until the point where external interface solutions became more commonplace, when we retired it and moved onto the budget FireWire champ in the shape of the M-Audio 1614FW for our comparative testing. Over the last few years, however, FireWire support has waned, and so it now makes sense to move onto a more everyday solution, one within easy reach of the average user.
So with that in mind we welcome to the test bench the USB-based Native Instruments Komplete Audio 6 interface, which weighs in at under £200 and should give a fair indication of what can be achieved by anyone with a good basic interface. Of course, if you have invested in a more premium solution these scores will most likely be even better in your final setup, but we hope to give a general idea of what can be achieved on the average DAW setup.
So without further ado, on with the stats!
You can click to expand the chart above, which gives us the testing results for the classic DAWBench RXC compressor test. The test puts a load on the CPU by letting us add compressor instances until the ASIO routine fails to cope and the audio breaks up.
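For illustration only, that procedure can be sketched as a simple loop. The `engine` object here is an imagined stand-in for a DAW session, not a real API:

```python
# Illustrative sketch of the DAWBench-style procedure: keep adding
# compressor instances until playback glitches, then report the last
# stable count. FakeEngine is an imagined stand-in, not a real DAW API.

class FakeEngine:
    """Pretend DAW that glitches once the load exceeds a fixed capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.count = 0

    def set_instances(self, count):
        self.count = count

    def playback_glitches(self):
        return self.count > self.capacity

def find_max_instances(engine, step=5, limit=1000):
    """Add plugin instances in steps until playback breaks up."""
    stable = 0
    for count in range(step, limit + 1, step):
        engine.set_instances(count)
        if engine.playback_glitches():
            break
        stable = count
    return stable

print(find_max_instances(FakeEngine(capacity=137)))  # 135
```

The final stable instance count is the benchmark score, which is why the results scale so directly with raw CPU grunt.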
The first thing to note is down at the bottom of the chart: AMD’s inclusion on the list. It’s the first time in a few generations that we’ve seen an AMD chip hold its own in the benchmarking round-up, and it has to be said that as an entry-level solution it could have some legs. Pulling roughly the same benchmark results as the first-generation i7 solutions when dealing with audio means it offers a solid platform to work on for a price point somewhere in the £230 region for the chip and board.
When doing the system maths, however, for roughly 1/3rd more on the motherboard and CPU price you can have an i5 4670 Intel CPU and board which will give you roughly 1/3rd more performance, so the bang per buck of both setups is roughly the same at where we would peg the entry-level positions. It could be argued, though, that another £70 on what will likely be a £700 machine wouldn’t break the bank and could be a very worthwhile move in the long term, as that 1/3rd more performance will more than likely come in handy further down the road and should be part of the consideration.
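That bang-per-buck arithmetic works out as follows; performance here is in arbitrary relative units, with the prices and 1/3rd figures taken from the discussion above:

```python
# Value arithmetic from the text: performance in arbitrary relative
# units, prices being the rough figures quoted above.

def perf_per_pound(relative_perf, price_gbp):
    return relative_perf / price_gbp

amd_entry = perf_per_pound(1.0, 230)             # AMD chip + board, ~£230
intel_4670 = perf_per_pound(4 / 3, 230 * 4 / 3)  # ~1/3 more cost, ~1/3 more perf

# Both work out to the same ~0.00435 performance units per pound, which
# is why the two setups sit level on bang per buck; the Intel route just
# buys more absolute headroom for the extra outlay.
```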
Looking further up the range we see the comparisons between the 4670K and 4770K CPUs and their predecessors, which were the chips of choice at their respective performance points in previous generations. The 4670K is another unlocked i5 solution offering 4 cores, whilst the 4770K is the direct replacement for the 3770K mid-range champion, offering the same 4 cores plus 4 hyperthreaded cores that have been available in previous generations.
For ease of comparison we made sure to test the key chips at both stock settings and with a fairly average overclock applied, so you can see how they scale with the extra clock speed boost. Even though the CPUs don’t appear to overclock quite as far this time around, we do see a fairly level increase in performance of around 5% to 7% across the board when examining like-for-like CPUs, meaning that whilst not major game changers, they do offer a step up on the previous generation.
Regarding the chipset itself, the big push this time by Intel has been improved power-saving features within the chipset and on the CPU itself. The inclusion of more C-states, which allow the PC to pretty much shut everything off to conserve power, is likely to be another major headache for audio system builders both pro and amateur alike, so keep an eye on those and give them some consideration when tweaking up your rigs.
The CPU microarchitecture has also been worked upon, and whilst a lot of the changes are a bit more technical than we’d want to go into in an article focused on audio applications, the expansion of the AVX2 instruction set may yield further performance improvements if software developers make use of the improvements implemented in the Haswell release further down the line. We don’t expect it to be a quick process, as it doesn’t make sense to focus on instruction tuning until it is supported by both Intel and AMD, but we expect that to happen over the coming year, and once it does, software companies often start to make use of such features in major updates, which could be a nice benefit to those adopting the platform.
Other benefits for adopters of the new platform include an increase in USB 3.0 ports available natively in the chipset (6 rather than the previous 4) and more SATA 6Gbps ports, which now total 6 natively over the previous generation’s 2-port solutions.
So where does that leave us? Not much different from before the launch of the new CPUs, with performance scaling with cost right up to the hex-core 3930K chips on a pretty reasonable cost-to-performance curve. The current high-end extreme, in the shape of the 3970X, continues to break that curve rather abruptly, although this is something most users have come to expect, and thankfully only the most demanding of users will even need to consider that solution, as the rest of the range offers a lot of performance that will satisfy the vast majority of current requirements.
The future promises a new high-end platform later in the year in the shape of Ivy Bridge Extreme. Details and release dates are still very hazy, but we’re looking forward to getting to grips with those chips when they eventually land. Right now though, the Haswell solutions offer a great upgrade for users of the first-generation i-series CPUs (the 4th-generation 4770K offers twice the benchmark performance of a first-generation i7 920) or earlier solutions, and they continue to dominate their respective price points in the performance stakes.
So you’ve decided to power your studio with a new PC for music production, but where do you start? Why exactly would you choose to build your own music PC or order a custom system over a standard off-the-shelf solution?
To answer that we have to distil what is required from a music PC, and for most people the 3 key requirements are stability, performance and silence.
Stability is an obvious must for a production studio music PC. There is nothing worse than being in a recording session and watching a few hours of your band’s performance, or the last hour’s worth of sound design, disappear into the digital ether because a system being pushed a bit too hard overloaded and rebooted.
Performance in the audio system field always comes down to “more is more”: more power under the hood of your production system means more plugins, more audio channels and more options when you are recording and mixing your music in the studio.
This leaves us with the third requirement, silence. If you’ve ever tried recording in a space with a noisy music PC nearby, its fans screaming away, then you’ll know it makes recording a very tricky process, as sensitive mics tend to pick up that type of irritating background noise. Also, when you’re mixing down you need to be fully focused on the mix, and a low-level background noise will clutter up the frequencies you perceive, which in turn makes getting the levels right far harder than it needs to be.
The three all balance against each other when trying to build the perfect recording and editing music PC and should be thought about carefully if you’re building one yourself.
Overclocking and getting the most from your music PC.
With the last few generations of CPUs, overclocking has moved out of the enthusiast market segment and become almost de rigueur for those wishing to get their money’s worth from any new production studio setup. It’s hard to argue against this course of action when even Intel and AMD have started using it as a feature when marketing their CPUs, but it is important to consider the consequences and how it trades off against the other two factors in our music PC production trinity.
If we look at the benchmarks we have produced here, we see that overclocked performance can lead to boosts of 30% or more over stock scores on the current generation of music PC setups, even at reasonably safe levels of pushing the system. You tend to find that when overclocking you have a fair amount of headroom before you have to start raising the voltages from stock levels, which is where the problems arise. Indeed, you may even find that at stock speeds you can drop the voltage the system uses whilst running, which can prove quite worthwhile.
Why is that?
Heat is the by-product of increased performance, and in turn it affects both stability and silence. Run more voltage through the setup and the system runs hotter; run less voltage and the reverse applies, with less heat being generated. Most setups will have a sweet spot where the CPU will run at still fairly close to stock voltages whilst being nicely overclocked, but should you attempt to push it even 1% past this sweet spot you’ll need a large jump in voltage to hold it steady. That in turn causes more heat and either makes your music PC very noisy as the fans ramp up, or costs performance as it overheats and throttles the chip back.
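The first-order CMOS dynamic power model, where power is proportional to capacitance times voltage squared times frequency, shows why that voltage jump is so costly. The scaling factors below are illustrative, not measurements of any real chip:

```python
# First-order CMOS dynamic power model: P is proportional to C * V^2 * f.
# Scaling factors are illustrative, not measurements of any real chip.

def relative_power(voltage_scale, freq_scale):
    """Power draw relative to stock (the capacitance term cancels out)."""
    return voltage_scale ** 2 * freq_scale

mild_oc = relative_power(1.00, 1.10)  # +10% clock at stock voltage
hard_oc = relative_power(1.15, 1.20)  # +20% clock needing +15% voltage

print(round(mild_oc, 2))  # 1.1  -> roughly 10% more heat
print(round(hard_oc, 2))  # 1.59 -> ~59% more heat for 20% more speed
```

Because the voltage term is squared, a modest overclock at stock voltage adds heat roughly in line with the speed gain, while one that needs a voltage bump adds heat far faster than performance.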
Consider the whole system when choosing parts.
So, stability and overclocking aside, choosing a good selection of components for an audio production system is a very wise move. We’ve all seen computer systems where corners have been cut and BSODs tend to occur, and the PC platform can be a bit notorious for this, but it can be avoided. Careful research of the components going into a music PC ensures fewer headaches down the road, and it’s never wise to cut corners in this regard.
In fact, it’s the parts people don’t tend to think about that can have the most impact, and one of the most overlooked is the PSU, which is pretty much the key to a good, stable audio system having a long, trouble-free working life. PSUs vary wildly in price even at the same overall performance level, and research is highly advised: that cheaper unit might be noisy, or worse, it might not supply stable voltages on the rail feeding your motherboard or sound card, meaning at best the system might hang randomly or at worst even burn out from fluctuations. Good motherboards and PSUs will regulate the power well and have more protection built in, but these cost extra. The first time you see a £30 PSU burn out half a PC from a power spike, you suddenly realize that the £70 investment in a unit with a protection circuit or two to protect everything else wouldn’t have been such a bad idea after all.
This takes us on to the silence part of the equation. It can be ignored to some degree if you're lucky enough to have an isolation cupboard for the music system, or are able to position it in another room away from your recording and mixing setup.
For those who aren't so lucky, however, noise can be a critical factor, although the good news is that with a bit of thought and planning you can put together a music PC that isn't going to ruin your working environment. Choosing a case with good, effective front-to-back airflow helps a lot, and there are many quiet options available now with solid construction and sound proofing as standard.
Choosing your fans well, with the trade-off between noise levels and air pressure being the foremost concern, can mean the difference between whisper quiet and screaming annoyance, so once again it's very important to read up on your options before settling on a final audio system specification. Bear in mind at this point that overclocking adds heat, and heat causes instability if left unchecked, which is a reasonable argument for not overclocking music PCs that simply need to be installed and relied on to work day in and day out. Faster fans solve the heat issue but create more overall noise, so finding the sweet spot between the three is the key to getting the most out of your new studio PC.
If you're building your own music production system, good research is the key, and there are many great sites out there to guide you through the process of selecting, building and even troubleshooting your new studio PC. Even if you purchase a custom audio PC solution, it's worth researching the parts going into the machine yourself, so that you're aware of any potential issues with the kit already in your recording studio setup.
We test and develop our solutions here with all of this in mind, so whether you're looking to spec up and purchase a new audio production system or build your own, you can speak to us and we can advise you on the best solution for your requirements, whether that's buying parts to self-build or tailoring a complete music system that's right for you.
The second half of 2011 has seen some high profile CPU releases in the form of both the AMD Bulldozer series and the new high-end Intel Sandy Bridge Extremes. Both platforms offer hex-core solutions with the additional benefit of the AVX extensions, which already enjoy modest support (Sonar's inclusion of the extensions has been widely reported) and look likely to become important as more firms adopt and optimize their software to support this functionality.
So a brief overview of our findings.
The AMD Bulldozer DAWBench results surprised us, and not in a good way. Performance for this new generation of CPU has been lacklustre at best; surprisingly, it wasn't much improved over the previous Phenom X6 series and even fell behind it in some testing. We suspect the shared cache in the Bulldozer design could be bottlenecking the CPU here, but either way it does seem that this design isn't ideal for audio usage.
The Intel Sandy Bridge Extremes, however, continue to push performance forward in the DAWBench testing, and we see some great gains in our initial runs. At stock there isn't much in it compared with an overclocked 2600K, which may still be the better option for a lot of users, but the right X79 boards offer a lot of extra memory slots (up to 8 memory sticks), giving those working on film and TV scores access to up to 64GB of memory, ideal for people running programs like VSL or large EW sound banks.
Initial testing of an overclocked Sandy Bridge Extreme 3930K shows some astounding gains of 30% to 40% across the board, which could make these CPUs reasonable value for money. Unfortunately our initial testing was done on B2-revision CPUs, which run a bit hot when pushed to this level of performance. Intel has announced a refined CPU revision (the C2) for late January 2012, so all being well we expect to offer an overclocked edition with these performance gains around the start of February. Of course, we shall publish updated results from our testing as and when it is carried out.
For further information on DAWBench and how we test please see this article.