
NAMM 18 – Presonus Expansions For The StudioLive Range

Presonus this year seem focused on building upon their popular StudioLive range of desks, with a number of new I/O expansion options to help flesh out your setup.

The Presonus NSB 8.8
The Presonus NSB 16.8

Designed to work seamlessly with the Presonus StudioLive Series III mixers, these two stage boxes allow for easy routing of your audio over standard Cat 5 or Cat 6 cable, reducing signal degradation over long runs and eliminating the need for large, heavy snakes around the venue.

When set up alongside a Series III console, the stage boxes can also be remote-controlled straight from the desk, or even from the dedicated touch control app. Featuring locking combo mic/line inputs, XMAX preamps and a pair of AVB connections that let you daisy-chain more units as required, these boxes offer the flexibility to route and patch your stage to accommodate pretty much any show.

StudioLive 24R Front
StudioLive 24R Rear

Taking it up a level, the 24R and 32R are further stage box offerings that also function as sub-mixers, making them ideal for running secondary monitor mixes on stage too. Once again you have full remote mixing capability when hooked up to a Series III StudioLive desk, allowing you to control and customize your scenes and making them just as suitable for long-term shows and installs as they are for going on tour.

Presonus EarMix 16M

Lastly, we have the EarMix 16M, which does away with the stage box functions, making it a hands-on, easy-to-control 16-channel headphone submixer and amplifier.

Each EarMix 16M accepts 16 mono channels of input via AVB networking, and multiple units can be networked with a StudioLive Series III mixer. This allows you to set up your own custom onstage mix in the headphones, and the line output also allows you to feed this through to your foldback setup for a fully custom monitoring arrangement.

Presonus EarMix 16M Rear

Take a look at the whole Presonus range @ Scan

NAMM 18 – UA let the Arrow fly.

UA’s big announcement came a week before NAMM this year, and it’s a small one, possibly their smallest yet, but all the better for it.

UA Arrow Top Down

The Arrow is a Thunderbolt 3 audio interface with an onboard DSP core for running UA plugins. A conversion section derived from UA’s flagship Apollo interface range, along with a headphone amp with a clean, punchy output, makes this diminutive audio interface an astounding portable recording solution to sit alongside any laptop.

UA Arrow Front

For the I/O we see an instrument input and headphone output on the front of the interface along with 2 more inputs and the main outputs around the rear.

UA Arrow Rear

The interface ships with the UA “Realtime Analog Classics” bundle, which offers a number of classic EQs, compressors and guitar amp models. Because the plugins run on the onboard DSP, this allows for near-zero-latency tracking through the interface with the effects applied, making it just as ideal for live performance as it is for capturing your session.

This looks to be the cheapest entry point into the UA ecosystem so far and the perfect device for leveraging your plugins when out of the studio and on the road. If you’re already a user of those plugins and want to take them on a mobile adventure, or perhaps just want to dip your toe into the pool, this could be the interface for you.

The UA Arrow – Available Now

UA Hardware @ Scan

NAMM 18 – The new T-Series speakers from ADAM Audio

With the surprise discontinuation of the ever-popular ADAM entry-level “F-Series” late in 2017, we’ve heard the question of what comes next a number of times now. Well, NAMM this year gives us the answer and, simply put, it’s cheaper and better!

Pre-launch discussion has seen talk of these arriving at an even more accessible price point than their predecessors, whilst still offering the same legendary ribbon tweeter design in the form of the U-ART implementation. On top of that, we’re advised that they also include a newly designed polypropylene symmetrical-excursion woofer that extends the bass to lower frequencies with less distortion than normally found in products around this price point.

Multiple technical innovations designed for ADAM’s flagship S Series and iconic AX Series have been included in this new lower-priced monitor range, including ADAM’s High-Frequency Propagation System (HPS), designed to offer consistent horizontal and vertical dispersion across the frequency spectrum, along with promised DSP innovations for the crossover system.

Adam t7 front

With a 5″ woofer on the T5 model and a 7″ woofer on the T7 edition, low-end extension down to 45Hz and 39Hz respectively, and a frequency response reaching up to 25kHz on both models, the specs on the page make them look like a solid upgrade when considered against the older F-Series.

Adam T7 Rear

Couple this with their almost stealth-fighter black finish and front-panel angles to match, and they certainly look the part, with classic ADAM design features throughout.

All that said, if they can hit the price points they are aiming for, then we may see an unexpected shake-up at the low end of the speaker market this year, and these should certainly be on the listening shortlist for anyone considering their first set of studio speakers.

Check out the full ADAM Audio range @ Scan

The Impact Of Meltdown And Spectre For Audio Workstations

No doubt, the hottest topic in I.T. at the start of 2018 continues to be the CPU security risks that have come to light as 2017 came to a close.

Known as “Spectre” and “Meltdown”, these are CPU design flaws about which an exhaustive amount of information has already been written, covering how they can lead to data within the computer being accessed by processes or other code that shouldn’t have access to it, potentially leaving the system open to attack by malicious code run on the machine.

For instance, one of the more concerning attack vectors in this scenario is servers hosting multiple customers on one system. In a world where many virtual machines commonly share a single hosting environment precisely to keep customers separate and secure, allowing this type of code to read data it shouldn’t opens up the possibility of transaction details, passwords and other customer records being exposed, which has understandably raised a large amount of concern amongst security professionals and end consumers alike.

Off the back of this have emerged the patches and updates required to solve the issue, and along with those are some rather alarming headline figures regarding performance levels potentially taking a hit, with claims of anywhere up to 30% overhead being eaten away by certain types of workload.

As there are many great resources already explaining this, including this one here, which can help outline what is going on, I’m not going to delve too much into the background of the issues, but rather focus on the results of the updates being applied.

We’re going to look at both the Microsoft patch at the software level and the BIOS update released to support it. There are two issues here, Meltdown and Spectre, and Spectre itself has two variants: one can be handled at the software level, while the other requires a microcode update applied via a BIOS update.

Microsoft has, of course, released their own advisory notes, which are certainly worth a review too and available here. At this time it is advised that Meltdown and all Spectre variants can affect Intel CPUs and some ARM-compatible mobile chips, whereas AMD is only affected by the Spectre variants, with AMD themselves having just issued an updated advisory at the time of writing, which can be found here. This is also a largely OS-agnostic issue, with Microsoft, Apple, Linux and even the mobile OSs all having the potential to be affected, and all rapidly deploying updates and patches to their users over the last few weeks.

At this point, I’m just going to quote a portion taken from the Microsoft link above verbatim, as it outlines the performance concerns we’re going to look at today. Keep in mind that in the text below “variant 1 & 2” are both referring to the Spectre issues, whereas Meltdown is referred to as simply “variant 3”.

One of the questions for all these fixes is the impact they could have on the performance of both PCs and servers. It is important to note that many of the benchmarks published so far do not include both OS and silicon updates. We’re performing our own sets of benchmarks and will publish them when complete, but I also want to note that we are simultaneously working on further refining our work to tune performance. In general, our experience is that Variant 1 and Variant 3 mitigations have minimal performance impact, while Variant 2 remediation, including OS and microcode, has a performance impact.

Here is the summary of what we have found so far:

  • With Windows 10 on newer silicon (2016-era PCs with Skylake, Kabylake or newer CPU), benchmarks show single-digit slowdowns, but we don’t expect most users to notice a change because these percentages are reflected in milliseconds.
  • With Windows 10 on older silicon (2015-era PCs with Haswell or older CPU), some benchmarks show more significant slowdowns, and we expect that some users will notice a decrease in system performance.
  • With Windows 8 and Windows 7 on older silicon (2015-era PCs with Haswell or older CPU), we expect most users to notice a decrease in system performance.
  • Windows Server on any silicon, especially in any IO-intensive application, shows a more significant performance impact when you enable the mitigations to isolate untrusted code within a Windows Server instance. This is why you want to be careful to evaluate the risk of untrusted code for each Windows Server instance, and balance the security versus performance tradeoff for your environment.

For context, on newer CPUs such as on Skylake and beyond, Intel has refined the instructions used to disable branch speculation to be more specific to indirect branches, reducing the overall performance penalty of the Spectre mitigation. Older versions of Windows have a larger performance impact because Windows 7 and Windows 8 have more user-kernel transitions because of legacy design decisions, such as all font rendering taking place in the kernel. We will publish data on benchmark performance in the weeks ahead.

The testing outlined here today is based on current hardware and Windows 10. Specifically, the board is an Asus Z370 Prime A, running on a Samsung PM961 M.2 drive, with a secondary small PNY SSD attached. The CPU is an i5 8600 and there is 16GB of memory in the system.

Software-wise, updates for Windows were completed right up to 01/01/18, and the patch from Microsoft to address this issue was released on 03/01/18 under the name “KB4056892”. I start the testing with the 605 BIOS from late 2017 and move through to the 606 BIOS designed to address the microcode update specified by Intel.

Early reports have suggested a hit to the drive subsystem, so at each stage, I’m going to test this and of course, I’ll be monitoring the CPU performance as each step is applied. Also keep in mind that as outlined in the Microsoft advisory above, different generations of hardware and solutions from different suppliers will be affected differently, but as Intel is suggested as being the hardest hit by these problems, it makes sense to examine a current generation to start with.

The Testing 

Going into this, I was hopeful that we wouldn’t see a whole load of processing power lost, simply because the already public explanations of how the flaw could affect the system didn’t read as something that should majorly impact the way an audio system handles itself.

Largely it’s played out as expected. When you’re working away within your sequencer, the ASIO driver is there doing its best to keep itself a priority, and generally, if the system is tuned to work well for music, there shouldn’t be a million programs in the background affected by this and causing the update to steal processing time. So, given we’re not running the sort of server-related workloads that I would expect to cause too much of an upset here, I was fairly confident that the impact shouldn’t be as bad as some suggestions had made out, and largely on the processing side it plays out like that.

However, prior to starting the testing, it was reported that storage subsystems were taking a hit due to these patches, and that of course demanded that we take a look at that along the way too. To start with the worst news, those previous reports are very much on the ball. I had two drives connected, and below we see the first set of results taken from a Samsung M.2 SM961 model drive.

M2 Test After Applying Meltdown Changes
Click to expand – Results From Left To Right – 1, Baseline Result. 2, After Microsoft Update. 3, With update and BIOS patch applied.

To help give you a little more background on what’s being tested here, each test should be as follows:

  • Seq Q32T1 – sequential read/write with multiple threads and queues
  • 4K Q32T1 – random read/write with multiple threads and queues
  • Seq – sequential read/write with a single queue and thread
  • 4K – random read/write with a single queue and thread

There is no doubt a performance hit here to the smaller 4K files, which is amplified as more threads are taken up to handle the workload in the 4K Q32T1 test. On the flip side, the sequential handling seems to either escape relatively unscathed or in some instances even improve to some degree, so there is some trade-off here depending on the workload being handled.

The gains did confuse me at first, and whilst sifting through the data I started to wonder if, given we were running off the OS drive, other services had perhaps skewed it slightly. Thankfully, I also had a project SSD hooked up to the system, so we can compare a second data point against the first.

SSD meltdown testing
Click to expand, Results left to right. 1, Baseline. 2, After Microsoft Patch. 3, After BIOS update.

The 4k results still show a decrease and the sequential once again hold fairly steady with a few read gains, so it looks like some rebalancing to the performance levels has taken place here too, whether intentional or not.

The DAWBench testing, on the other hand, ends up with a more positive result. This time around I’ve pulled out the newer SGA-based DSP test, as well as the Kontakt-based DAWBench VI test, and both were run within Reaper.

SGA test for Meltdown
Click to expand

The result of the DSP test, which concentrates on loading up the CPU, shows no difference that doesn’t fall within the margin of error and variance. It should also be noted that the CPU was running at 99% load when it topped out, so we don’t appear to be losing any overhead in that regard.

DAWBench VI test for Meltdown
Click to expand.

With the Kontakt based DAWBench VI test, we’re seeing anything between 5% and 8% overhead reduction depending on the ASIO buffer, with the tightest 64 buffer suffering after each update whereas the looser settings coped better with the software update before taking a small hit when we get up to the 256 buffer.

The Verdict

Ultimately the concern here is how will it impact you in real terms?

The minor loss of overhead in the second testing set was from a Kontakt-heavy project, and the outcome of the drive tests would suggest that anyone with a sample library that relies heavily on disk streaming may wish to be careful with any projects that were already on the edge prior to the update being applied.

I also timed that project being loaded across all three states of the update process as I went, with the baseline time to open the project being 20 seconds. After the software update, we didn’t see a change in this, with the project still taking 20 seconds to open. However, the BIOS update, once applied along with the OS update, added 2 seconds to this, giving us roughly a 10% increase in the project load time.

So at this time, whilst any performance loss is certainly not welcome, thankfully we’re not seeing quite the huge skew in the performance stakes that has been touted, and it falls well short of the 30% figure that was initially being suggested for the CPU hit.

There have been suggestions by Microsoft that older generations might be more severely affected, and from the description of how it affects servers I suspect that we may well see that 30% figure, and even higher, under certain workloads in server environments, but that it’ll be centred more around the database or virtual machine server segments than the creative workstation user base.

Outside of our small corner of the world, TechSpot has been running a series of tests since the news broke and it’s interesting to see other software outside of the audio workstation environment seems to be largely behaving the same for a lot of end users, as are the storage setups that they have tested. If you’d like to read through those you can do so here.

Looking Forward

The issue was discovered over the course of 2017 but largely kept under wraps so it couldn’t be exploited at the time. However, the existence of the problem leaked before the NDA was lifted, and it feels like a few of the solutions that have been pushed out in the days since may have been a little rushed in order to stop anyone more unethical capitalizing upon it.

As such, I would expect performance to bounce around over the next few months as they test, tweak and release new drivers, firmware and BIOS solutions. The concern right now for firms is ensuring that systems around the world are secure and I would expect there to be a period of optimization to follow once they have removed the risk of malware or worse impacting the end user.

Thankfully, it’s possible to remove the patch after you apply it, so in a worst case scenario you can still revert back and block it should it have a more adverse effect upon your system, although it will then leave you open to possible attacks. Of course, leaving the machine offline will obviously protect you, but then that can be a hard thing to do in a modern studio where software maintenance and remote collaboration are both almost daily requirements for many users. 

However you choose to proceed will no doubt be system- and situation-specific, and I suspect that as updates appear the best practice for your system may change over the coming months. Certainly, the best advice I can offer here is to keep your eye on how this develops, make the choices that keep you secure without hampering your workflow, and review the situation going forward to see if further optimizations can help restore things to pre-patch levels as a resolution to the problem is worked upon by both the hardware and software providers.

The Scan Pro Audio Picks Of The Year 2017

At this time of year, there is one thing that is as inevitable as the papers proclaiming that an incoming weather front is going to cause the end of the world (again) and that is, of course, the annual end of year retrospective lists.

Not to be left out, we have here five bits of kit that stood out for us over the course of the year and, more importantly, just why that might have been. In fact, some of this kit proved to be a slow burn in earning the team’s support and admiration, which in itself lets us take a slightly longer-term view of the gear in hand.

Presonus Quantum

Presonus over the last few years have managed to elevate their brand through the highly praised Studio One software, which continues to grow in popularity as many users’ sequencer of choice. Their audio interfaces, however, have been a mixed bag up until now, with little to help them stand out from the crowd.

That was until their foray into Thunderbolt brought us this little gem.

Presonus Quantum Front Panel

Goalposts were moved and expectations were raised as Presonus brought us this absolute winner. Extremely low latency times that challenge the very best out there, plenty of I/O, great conversion and a respectable signal path throughout the interface mean that this is frankly a great all-around package.

Hedd 05

From Klaus Heinz, the original designer behind the ADAM speaker range, comes the company HEDD (Heinz Electrodynamic Designs) and their own range of studio speakers. They’re built around the same crystal-clear AMT-based tweeters that Klaus has always favoured in the past, but with refinements to the sound that clearly illustrate these are very much their own thing.

Whilst they have the larger HEDD 20’s & 30’s in the range, it’s the entry-level 05’s that we’ve picked here. They have superb balance and depth to them and, frankly, an astounding bass representation for the size. The 05’s themselves stand up well against speakers many times their price, and the slightly larger 07’s do little more than add a bit more depth to the sound with a few more notes at the bottom end of the scale, while still retaining the first-rate tonal balance found on the smaller edition.

We’ve been so blown away by these that a few of us have even taken sets home as secondary pairs for our own studios, so they’ve certainly left a lasting impression.

Friedman Sir Compre

Friedman is best known for creating high-end rock amps with a vintage classic rock tone inspired by British tube amps from the 60’s and 70’s, and their pedal range is similarly shaped by this legacy. In this instance, the Sir Compre isn’t a pedal that sets out to emulate any given amplifier, but rather a compressor with a very subtle overdrive circuit built in, adding body and a little bit of grit to your sound.

With a bit of tweaking it’s very easy to get a wide selection of classic rock tones and, as our team noted, if you want to nail that classic 70’s rock sound it’s extremely easy to do so with this pedal; we’ve already found it perfect for recreating the tones found on the classic “Rock Steady” by Bad Company.

Novation Peak

Novation is a company with a bit of a history of producing great, affordable synths and this one is certainly a quality all-rounder.

At its heart it’s a digital subtractive synth with added wavetable and FM synthesis possibilities, all fed into an analogue filter. Add in a couple of LFOs, a modulation matrix offering 16 routable slots and CV connectivity for the more adventurous, and we have one very well-featured synth; it’s easy to see why Tom enjoyed himself so much when he finally got his hands on one.

Check out the video below to see Tom getting to grips with it.

Audeze MX4

Quite possibly some of the best studio-oriented headphones currently available, they simply need to be heard to be believed.

Based upon Audeze’s planar magnetic driver design, the MX4’s continue to improve on the previous flagship LCD-4’s by offering a new durable magnesium housing along with a carbon fibre headband design that makes them 30% lighter overall, helping to ensure comfort during those longer studio sessions.

A premium product with a premium price tag, but capable of delivering a level of sonic quality that rivals speakers 2 or 3 times its price, making them the ultimate secret weapon for many a mastering engineer.

Audeze MX 4

Intel i9 7940X & 7960X Dawbench Testing.

Today we have a few more models from the Intel i9 range on the desk, in the shape of the 14-core 7940X and the 16-core 7960X. I was hopeful that the 18-core would be joining them as well this time around, but currently another team here have their hands on it, so it may prove to be a few weeks more until I get a chance to sit down and test that one.

Now I’m not too disappointed about this as for me and possibly the more regular readers of my musings, the 16 core we have on the desk today already is threatening to be the upper ceiling for effective audio use.

The reason for this is that I’ve yet to knowingly come across a sequencer that can address more than 32 threads effectively for audio handling under ASIO. These chips offer 28 and 32 threads respectively as they are hyper-threaded, so unless something has changed at a software level that I’ve missed (and please contact me if so), I suspect that at this time the 16-core chip may well be placed to max out the current generation of sequencers.

Of course, when I get a moment and access to the larger chip, I’ll give it a proper look over to examine this in more depth, but for the time being on with the show!

Both chips this time around advise a 165W TDP figure, which is up from the 140W TDP quoted for the 7920X we looked at a month or two back. The TDP figure itself is supposed to be an estimate of power usage under regular workloads, rather than peak draw under full load. This helps to explain how a 14-core and a 16-core chip can share the same TDP rating, as the 14-core has a higher base clock than the 16-core to compensate. So in this instance, it appears that they have to some degree picked the TDP and worked backward to establish the highest-performing clocks at that given power profile.

Once the system itself starts to push the turbo, or when you start to overclock the chip the power draw will start to rise quite rapidly. In this instance, I’m working with my normal air cooler of choice for this sort of system in the shape of the BeQuiet Dark Rock Pro 3 which is rated at 250W TDP.  Water-loop coolers or air coolers with more aggressive fan profiles will be able to take this further, but as is always a concern for studio users we have to consider the balancing of noise and performance too.

Much like the 7920X we looked at previously, the chips are both rated to a 4.2GHz max two-core turbo, with staggered clocks running slower on the other cores. I took a shot at running all cores at 4.2GHz, but like the 7920X before it we could only hit that on a couple of cores before heat throttling would pull them back again.

Just like the 7920X again, however, if we pull both of these chips back by 100MHz per core (in this instance both to 4.1GHz) they prove to be stable over hours of stress testing, and certainly within the temperature limits we like to see here, so with that in mind we’re going to test at this point as it’s certainly achievable as an everyday setting.

As always first up is the CPUid chip info page and benchmarks along with the Geekbench results.

Intel i9 7940X @ 4.1GHz

7940x CPUid 7940x CPUid Benchmark

Geekbench 7940x

Intel i9 7960X @ 4.1GHz

 

7960x CPUid 7960x CPUid Benchmark

 

Geekbench 4 7960X

Both chips are clocked to the same level and the per-core score here reflects that. The multi-core score, of course, offers a leap from one chip to the other as you’d expect from throwing a few more cores into the equation.

Geekbench Comparison Chart
Click To Expand

The classic DAWBench DSP test and the newer Kontakt-based DAWBench VI test follow, and once again there isn’t a whole lot I can add to this.

DAWBench Classic
Click To Expand

 

7960x DB6
Click To Expand

The added cores give us improvements across both of these chips as we’ve already seen in the more general purpose tests. The 7960X does appear to offer a slightly better performance curve at the higher buffer rates, which I suspect could be attributed to the increase in the cache but otherwise, it all scales pretty much as we’d expect. 

Given the 7940X maintains, at current pricing, the roughly £100-per-extra-core figure (when compared to the 7920X) that Intel was aiming for at launch, it does seem to offer a similar sort of value proposition to the smaller i9’s, just in this case more is more. The 7960X raises this to roughly £125 per extra core over the 7940X at current pricing, so a bit of cost creep there, but certainly not as pricey as we’ve sometimes seen over the years on the higher-end chips in the range.

The main concern initially was certainly regarding heat, but it looks like the continued refinement of the silicon since we saw the first i9 batches a few months ago has given them time to get ahead of this and ensure that the chips do well out of the box given adequate cooling.  

With the launch of the Coffee Lake chips in the midrange, some of the value of the lower-end enthusiast chips has quickly become questionable, but the i9 range above them continues to offer performance levels previously unseen from Intel. There’s a lot of performance here, although the price matches it accordingly, and we often find ourselves at a point where more midrange-level systems are good enough for the majority of users.

However, for the power user with more exhaustive requirements who finds that they can still manage to leverage every last drop of power from any system they get their hands on, I’m sure there will be plenty here to pique your interest.

Previous CPU Benchmarking Coverage
3XS Systems @ Scan

First Look At The Intel 8700K As The i7 Range Gets A Caffeine Injection.

I’ll be honest, as far as this chipset naming scheme goes it feels that we might be starting to run out of sensible candidates. The Englishman in me wants to eschew this platform completely and hold out for the inevitable lake of Tea that is no doubt on the way. But alas the benchmarking has bean done and it’s too latte to skip over it now. 

*Ahem* sorry, I think it’s almost out of my system now. 

Right, where was I? 

Time To Wake Up and Smell The….

Coffee Lake has been a blip on the horizon for quite a while now, and the promise of more cores in the middle and lower end CPU brackets whilst inevitable has no doubt taken a bit longer than some of us might have expected. 

Is it a knee-jerk reaction to the AMD’s popular releases earlier this year? I suspect the platform itself isn’t, as it takes a lot more than 6 months to put together a new chipset and CPU range but certainly it feels like this new hardware selection might be hitting the shelf a little earlier than perhaps was originally planned.

It’s clear that we’ve had a few generations now where the CPUs haven’t really made any major gains other than silicon refinement, and clock speeds haven’t exceeded 5GHz from the Intel factory (of course, the more ambitious overclockers may have had other ideas), so the obvious next move for offering more power in the range would be to stack up more cores, much like their server-based brethren in the Xeon range.

What is undeniable is that it certainly appears even to the casual observer that the competitor’s recent resurgence has forced Intel’s  hand somewhat and very possibly accelerated the release schedule of the models being discussed here.

I say this as the introduction of the new range and i7 8700K specifically that we’re looking at today highlights some interesting oddities in the current lineup that could be in danger of making some of the more recent enthusiast chips look a little bit redundant. 

This platform as a whole isn’t just about an i7 refresh though, rather we’re seeing upgrades to the mainstream i7’s, the i5’s and the i3’s which we’ll get on the bench over the coming weeks.

The i7’s have gained 2 additional physical cores and still have hyperthreading, meaning 12 logical cores in total.

The i5’s have 6 cores and no hyperthreading.

The i3’s have 4 cores and no hyperthreading.

Positioning-wise, Intel’s own suggestions have focused on the i5’s being pushed for gaming and streaming, with up to 4 physical cores being preferred for games and then a couple extra to handle the OS and streaming. The i3’s keep the traditional entry-level home office and media center sort of positioning that we’ve come to expect over the years, and that leaves the 6-core i7’s sat at the top of the pile of the more mainstream chip options.

Intel traditionally has always found itself a little lost when trying to market 6 cores or more. They know how to do it with servers where the software will lap up the parallelization capabilities of such CPUs with ease. But when it comes to the general public just how many regular users have had the need to leverage all those cores or indeed run software that can do it effectively? 

It’s why in recent years there has been a marked move towards pushing these sorts of chips to content creators and offering the ability to provide the resources that those sort of users tend to benefit from. It’s the audio and video producers, editors, writers and artists that tend to benefit from these sorts of advances. 

In short, very likely you dear reader.

Ok, so let’s take a look at some data.

8700K CPUz at 4.7GHz

CPUz 4.7 Benchmark

At stock the chip itself is sold as a 6-core with Hyper-Threading and runs with a base clock speed of 3.7GHz and a max turbo of 4.7GHz. For testing, I’ve locked all the cores off at the turbo max and tested with a Dark Rock 3, having tried various cooler models before starting. With that cooler in hand, it was bouncing around 75 degrees after a few hours’ torture testing, which is great. I did try running it around the 5GHz mark, which was easy to do and perfectly stable, although with the setup I had it was on the tipping point of overheating. If you moved up to a water-cooling loop I reckon you’d have this running fine around 5GHz, and indeed I did for some of the testing period with no real issues, although I did notice that the voltages and heat start to creep up rapidly past the 4.7GHz point.

8700k at 4.7

Geekbench 4 8700K
Click to expand

The Geekbench 4 results show us something interesting and even slightly unexpected. With the previous-generation 7700K clocked to 4.5GHz when I benched it and the 8700K being run at 4.7GHz, I was expecting to see gains on the single-core score as well as the increase in the multi-core score. Instead, the single-core score is a few percent lower; I did retest a couple of times, found that this was repeatable, and had the results confirmed by another colleague.

The multi-core score, on the other hand, shows the gains that this chip is all about, exceeding the previous generation as you would expect with more cores being available. The gains here, in fact, highlight something I was already thinking about earlier in the year when the enthusiast i7’s got a refresh, in that this chip looks to not only match the 7800X found in the top-end range, but somewhat exceed its capabilities at a lower overall price point.

DAWBench DSP 8700K
Click to expand
DawBench vi 8700K
Click to expand

In the testing above both the DAWBench DSP and the DAWBench vi tests continue to reflect this too, effectively raising questions as to the point of that entry-level 7800X in the enthusiast range.

There is almost price parity between the 7800X and 8700K at launch, although the X299 boards tend to come in around £50 to £100 or more above the boards we’re seeing in the Z370 range. You do of course get extra memory slots in the X299 range, but then you can still mount 64GB on the mid-range board, which for a lot of users is likely to be enough for the lifecycle of any new machine.

You also get an onboard GPU solution with the 8700K and, if anything has been proven over the recent Intel generations, it’s that those onboard GPU solutions are pretty good in the studio these days, perhaps also offering additional value to any new system build.

Grinding Out A Conclusion

I’m sure pricing from both sides will be competitive over the coming months as they aim to steal market share from each other. So with that in mind, it’s handy to keep these metrics in mind, along with the current market pricing at your time of purchase in order to make your own informed choice. I will say that at this point Intel has done well to reposition themselves after AMD’s strongest year in a very long time, although really their biggest achievement here looks to have been cannibalizing part of their own range in the process. 

That, of course, is by no means a complaint, as when pricing is smashed like this the biggest winner out there is the buying public, and that truly is a marvelous thing. Comparing the 8700K to the 7700K on Geekbench alone shows us a 50% improvement in performance overhead for a tiny bit more than the previous generation cost, which frankly is the sort of generation-on-generation improvement that we would all like to be seeing every couple of years, rather than the 10% extra per generation we’ve been seeing of late.

Whether you choose to go with an Intel or an AMD for your next upgrade, we’ve seen that the performance gains for your money are likely to be pretty great this time around on both platforms. If your current system is more than 3 or 4 years old then it’s even more likely that there will be a pretty strong upgrade path open to you when you do finally choose to take that jump. With hints of Ryzen 2 being on its way next year from AMD and the likelihood that Intel would never leave any new release unchallenged, we could be in for an interesting 2018 too!

All DAWBench Testing

3XS Audio Systems @ Scan

The Intel i9 7920X On The Bench

Back in June this year we took a look at the first i9 CPU model with the launch of the i9 7900X. Intel has since followed on from that with the rest of the i9 chips receiving a paper launch back in late August, with the promise of those CPUs making it into the public’s hands shortly afterward. Since then we’ve seen the first stock start to arrive with us here at Scan, and we’ve now had a chance to sit down and test the first of this extended i9 range in the shape of the i9 7920X.

The CPU itself has 12 cores along with hyper-threading, offering us a total of 24 logical cores to play with. The base clock of the chip is 2.9GHz with a max turbo frequency of 4.30GHz and a reported 140W TDP, which is much in line with the rest of the chips below it in the enthusiast range. Running at that base clock speed the chip is 400MHz slower per core than the 10-core edition 7900X. So if you add up all the available cores running at those clock speeds and compare the two chips on paper (12 × 2.9GHz = 34.8GHz vs 10 × 3.3GHz = 33GHz), there looks to be less than 2GHz of total available overhead separating them, but still in the 7920X’s favor.

So looking at it that way, why would you pay the £200 premium for the 12-core? Well, interestingly, both CPUs claim to be able to turbo to the same max clock rating of 4.3GHz, although it should be noted that turbo is designed to factor in power usage and heat generation too, so if your cooling isn’t up to the job then you shouldn’t expect it to be hitting such heady heights constantly. Whilst I’m concerned that I may be sounding like a broken record by this point, as with all the high-end CPU releases this year you should be taking care with your cooling selection in order to ensure you get the maximum amount of performance from your chip.

Of course, the last thing we want to see is the power states throttling the chip in use and hampering our testing, so as always we’ve ensured decent cooling but aimed to keep the noise levels reasonable where we can. Normally we’d look to tweak it up to max turbo and lock it off, whilst keeping those temperatures in check and ensuring the system will be able to deliver a constant performance return for your needs.

However, in this case, I’ve not taken it quite all the way to the turbo max, choosing to keep it held back slightly at 4.2GHz across all cores. I was finding that the CPU would only ever bounce off 4.3GHz when left to work under its own optimized settings, and on the sort of air cooling we tend to favour it wouldn’t quite maintain the 4.3GHz that was achieved with the 7900X in the last round of testing without occasionally throttling back. It will, however, do it on an AIO water loop cooler, although you’re adding another higher-speed fan in that scenario and I didn’t feel the tradeoff was worth it personally, but it’s certainly worth considering for anyone lucky enough to have a separate machine and control room where a bit more noise would go unnoticed.

Just as a note at this point, if you run it at stock and let it work its own turbo settings then you can expect an idle temperature around 40 degrees, and under heavy load it should still be keeping under 80 degrees on average, which is acceptable and certainly better than we suspected around the time of the 7900X launch. However, I was seeing the P-states raising and dropping the core clock speeds in order to keep power usage down, and upon running Geekbench and comparing the results, my 4.2GHz-on-all-cores setting gave us an additional 2000 points (around a 7% increase) over the turbo-to-4.3GHz default setting found in the stock configuration. My own temps idled in the 40’s and maxed around 85 degrees whilst running the torture tests for an afternoon, so for a few degrees more you can ensure that you get more consistent performance from the setup.

Also worth noting is that we’ve had our CAD workstations up to around 4.5GHz and higher in a number of instances, although there we’re talking about a full water loop and a number of extra fans to maintain stability under that sort of workload, which wouldn’t be ideal for users working in close proximity to a highly sensitive mic.

Ok, so first up is the CPUz information for the chip at hand, as well as its Geekbench results.


7920X CPUz
CPUz 4.2GHz bench 7920X Geekbench 4

More important for this comparison are the Geekbench 4 results, and to be frank it’s all pretty much where we’d expect it to be in this one.

7920X geekbench 4 Chart
Click to expand.

The single-core score is down compared with the 7900X, but we’d expect this given the 4.2GHz clocking of the chip against the 4.3GHz 7900X. The multi-core score, on the other hand, is up, but then we have a few more cores, so all in all it’s pretty much as expected here.

Dawbench DSP 7920X
Click to Expand
Dawbench 6
Click to Expand

On with the DAWBench tests and again, no real surprises here. I’d peg it at being around an average of a 10% or so increase over the 7900X, which, given we’re just stacking more cores on the same chip design, really shouldn’t surprise us at all. It’s a solid solution and certainly the highest-benching chip we’ve seen so far, barring the models due to land above it. Bang per buck, its £1020 price tag compared to the £900 for the 10-core edition seems to sit well on the Intel price curve, and it looks like the wider market situation has curbed some of the price points we might otherwise have seen these chips hit.

And that’s the crux of it right now. Depending on your application and needs there are solutions from both sides that might fit you well. I’m not going to delve too far into discussing the value of the offerings that are currently available, as prices do seem to be in flux to some degree with this generation. Initially, when it was listed, we were discussing an estimated price of £100 per core, and now we seem to be around £90 per core at the time of writing, which seems to be a positive result for anyone wishing to pick one up.

Of course, the benchmarks should always be kept in mind along with that current pricing and it remains great to see continued healthy competition and I suspect with the further chips still to come this year, we may still see some additional movement before the market truly starts to settle after what really has been a release packed 12 months.

The 3XS Systems Selection @ Scan

First look at the AMD Threadripper 1920X & 1950X

Another month and another chip round up, with them still coming thick and fast, hitting the shelves at almost an unprecedented rate.

AMD’s Ryzen range arrived with us towards the end of Q1 this year, and its impact upon the wider market sent shockwaves through the computer industry for the first time in well over a decade for AMD.

Although well received at launch, the Ryzen platform did have the sort of early teething problems that you would expect from any first-generation implementation of a new chipset range. Its strength was that it was great for any software that could effectively leverage the processing performance on offer across the multitude of cores being made available. The platform, whilst perfect for a great many tasks across any number of market segments, did also have its inherent weaknesses, which would crop up in various scenarios, and one field where its design limitations became apparent was real-time audio.

Getting to the core of the problem.

If there’s one bit of well-meaning advice that drives system builders up the wall, it’s the “clocks over cores” wisdom that has been offered up by DAW software firms since what feels like the dawn of time. It’s a double-edged sword, in that it tries to simplify a complicated issue without ever explaining why, or in what situations, it truly matters.

To give a bit of crucial background information as to why this might be we need to start from the point of view that your DAW software is pretty lousy for parallelization. 

That’s it, the dirty secret. The one thing computers are good at is breaking down complex chains of data for quick and easy processing, except in this instance, not so much.

Audio works with real-time buffers. Your ASIO drivers have those 64/128/256 buffer settings which are nothing more than chunks of time where the data is captured entering the system and held in a buffer until it is full, before being passed over to the CPU to do its magic and get the work done.

If the workload is processed before the next buffer is full then life is great and everything is working as intended. If, however, the buffer becomes full prior to the previous batch of information being dealt with, then data is lost, and this translates to your ears as clicks and pops in the audio.
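To put rough numbers on that deadline, the time the CPU has to finish each chunk of work is simply the buffer size divided by the sample rate. A minimal sketch of the arithmetic (the 44.1kHz sample rate and buffer sizes here are common illustrative figures, not tied to any particular interface):

```python
# Rough deadline per ASIO buffer: buffer_size / sample_rate.
# Illustrative figures only - swap in your own interface settings.

SAMPLE_RATE = 44_100  # samples per second

for buffer_size in (64, 128, 256):
    deadline_ms = buffer_size / SAMPLE_RATE * 1000
    print(f"{buffer_size:>4} samples @ {SAMPLE_RATE} Hz "
          f"-> roughly {deadline_ms:.2f} ms to process each buffer")
```

So at a 64 buffer the whole plugin workload has to be finished in under a millisecond and a half; miss that window and you get the clicks and pops described above.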

Now, with a single-core system, this is straightforward. Say you’re working with one track of audio to process with some effects. The whole track would be sent to the CPU, the CPU processes the chain and spits out some audio for you to hear.

So far so easy.

Now say you have 2 or 3 tracks of audio and 1 core. These tracks will be processed on the available core one at a time and assuming all the tracks in the pile are processed prior to the buffer reset then we’re still good. In this instance by having a faster core to work on, more of these chains can be processed within the buffer time that has been allocated and more speed certainly means more processing being done in this example.

So now we consider systems with 2 or more cores. The channel chains are passed to the cores as they become available, and once more the whole channel chain is processed on a single core.

Why?

Because to split the channels over more than one core would require us to divide up the workload and then recombine it all again post-processing, which for real-time audio would leave other components in the chain waiting for the data to be shuttled back and forth between the cores. All this lag means we’d lose processing cycles as that data is ferried about, so we’d continue to lose more performance with each and every added core, something I will often refer to as processing overhead.

Clock watching

Now, the upshot of this is that lower-clocked chips can often be less efficient than higher-clocked chips, especially with newer, more demanding software.

So, just for an admittedly extreme example, say that you have the two following chips.

CPU 1 has 12 cores running at 2GHz

CPU 2 has 4 cores running at 4GHz

The maths looks simple, 2 × 12 beats 4 × 4 on paper, but in this situation it comes down to software and processing chain complexity. If you have a particularly demanding plugin chain that is capable of overloading one of those 2GHz CPU cores, then the resulting glitching will proceed to ruin the output from the other 11 cores, as the sketch below illustrates.
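As a rough illustration of why the raw core-count maths can mislead, here’s a minimal toy model (the per-chain costs below are invented figures purely for demonstration): each channel chain has to fit inside a single core’s per-buffer budget, so one chain that exceeds what a 2GHz core can do glitches the whole buffer, no matter how many other cores sit idle.

```python
# Toy model: every channel chain must complete on one core within the buffer
# deadline. A core's per-buffer "budget" is treated as proportional to its
# clock speed; the chain costs are invented numbers purely for illustration.

def buffer_glitches(chain_costs, core_count, core_clock_ghz):
    """True if any single chain exceeds one core's budget, or if the total
    workload exceeds what all cores together can process in the buffer."""
    per_core_budget = core_clock_ghz
    total_budget = core_count * core_clock_ghz
    too_heavy = any(cost > per_core_budget for cost in chain_costs)
    overloaded = sum(chain_costs) > total_budget
    return too_heavy or overloaded

# One demanding chain (cost 3.0) plus a handful of light ones.
chains = [3.0, 1.0, 1.0, 1.0, 1.0]

print("12 cores @ 2GHz glitches:", buffer_glitches(chains, 12, 2.0))  # True
print(" 4 cores @ 4GHz glitches:", buffer_glitches(chains, 4, 4.0))   # False
```

Despite having less aggregate clock speed on paper, the 4-core chip copes here because no single chain exceeds what one of its cores can get through inside the buffer window.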

In this situation, the more overhead you have to play with on each core, the less chance there is that an overly demanding plugin chain is going to be able to sink the whole lot.

This is also one of the reasons we tend to steer clear of server CPUs with high core counts and low clock speeds, and it is largely what the general advice is referring to.

On the other hand, when we talk about 4-core CPUs at 4GHz vs 8-core CPUs at 3.5GHz, the difference between them in clock speeds isn’t going to be enough to cause problems with even the busiest of chains, and once that is the case then more cores on a single chip tend to become the more attractive proposition as far as getting the best performance out is concerned.

Seeing Double

So with that covered, we’ll quickly move on to the other problematic issue with working with server chips, which is the data exchange process between memory banks.

Dual-chip systems are capable of offering the ultimate levels of performance, this much is true, but we have to remember that returns on your investment diminish quickly as we move through the models.

Not only do we have the concerns outlined above about cores and clocks, but when you move to dealing with more than one CPU you have to start to consider “NUMA”  (Non-uniform memory access) overheads caused by using multiple processors. 

CPUs can exchange data between themselves via high-speed connections, and in AMD’s case this is done via an extension to the Infinity Fabric design that allows the quick exchange of data between the cores both on and off the chip(s). The memory holds data until it’s needed, and in order to ensure the best performance from a CPU the system tries to store the data held in memory on the physical RAM stick nearest to the physical core. By keeping the distance between them as short as possible, it ensures the least amount of lag between information being requested and it being received.

This is fine when dealing with 1 CPU and in the event that a bank of RAM is full, then moving and rebalancing the data across other memory banks isn’t going to add too much lag to the data being retrieved. However when you add a second CPU to the setup and an additional set of memory banks, then you suddenly find yourself trying to manage the data being sent and called between the chips as well as the memory banks attached. In this instance when a RAM bank is full then it might end up bouncing the data to free space on a bank connected to the other CPU in the system, meaning the data may have to travel that much further across the board when being accessed. 

As we discussed in the previous section, any wait for data to be called can cause inefficiencies where the CPU has to sit waiting for the data to arrive. All this happens in microseconds, but if it ends up happening hundreds of thousands of times every second, our ASIO meter ends up looking like it’s overloading due to lagged data being dropped everywhere, whilst our CPU performance meter may look like it’s only being half used at the same time.

This means that we do tend to expect there to be an overhead when dealing with dual-chip systems. Exactly how much depends entirely on what’s being run on each channel and how much data is being exchanged internally between those chips, but the take-home is that we expect to have to pay a lot more for server-grade solutions that can match the high-end enthusiast-class chips we see in the consumer market, at least in situations where real-time workloads are crucial, like dealing with ASIO-based audio. It’s a completely different scenario when you deal with a task like offline rendering for video, where the processor and RAM are being system-managed in their own time and working to their own rules; server-grade CPU options make a lot of sense there and are very, very efficient. The rough sketch below puts some toy numbers on the idea.
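As a very rough back-of-the-envelope sketch of why those extra hops add up (all of the latency and access-count figures below are invented round numbers for illustration, not measurements of any real platform):

```python
# Toy NUMA model: some fraction of memory requests land on the RAM banks
# attached to the *other* CPU and pay an extra hop. All figures are invented
# round numbers, just to show how small per-access penalties accumulate.

LOCAL_NS = 80     # assumed cost of an access to RAM on the same CPU
REMOTE_NS = 140   # assumed cost of an access that crosses to the other CPU

ACCESSES_PER_BUFFER = 20_000  # assumed memory requests made per audio buffer

def memory_wait_ms(remote_fraction):
    avg_ns = (1 - remote_fraction) * LOCAL_NS + remote_fraction * REMOTE_NS
    return avg_ns * ACCESSES_PER_BUFFER / 1_000_000

for remote in (0.0, 0.25, 0.5):
    print(f"{remote:4.0%} of accesses remote -> "
          f"{memory_wait_ms(remote):.1f} ms of memory wait per buffer")
```

Set against the roughly 2.9ms deadline of a 128-sample buffer from the earlier sketch, that creeping extra wait is exactly the sort of thing that leaves the ASIO meter looking overloaded while the CPU meter still appears half idle.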

To server and protect

So why all the server background when we’re looking at desktop chips today? Threadripper has been positioned as AMD’s answer to Intel’s enthusiast range of chips, and largely a direct response to the i7 7800X and 7820X and the i9 7900X that launched just last month, with AMD’s Epyc server-grade chips still sat in waiting.

An early de-lidding of the Threadripper series chips quickly showed us that the basis of the new chips is two Zen CPUs connected together. The “Infinity Fabric” core interconnect design makes it easy for them to add more cores and expand these chips up through the range; indeed their server solution EPYC is based on the same “Zen” building blocks at its heart as both Ryzen and Threadripper, just with more cores piled in there.

Knowing this before testing gave me certain expectations going in that I wanted to examine. The first was Ryzen’s previously inefficient core handling when dealing with low-latency workloads, where we established in the earlier coverage that the efficiency of the processor at lower buffer settings would suffer.

This, I suspected, was an example of data transfer lag between cores, and at the time of that last look we weren’t certain how consistent this might prove to be across the range. Without more experience of the platform we didn’t know if this was something inherent to the design or if perhaps it might be solved in a later update. As we’ve seen since its launch, and having checked over other CPUs in testing, this performance scaling seems to be a constant across all the chips we’ve seen so far and something that can be consistently replicated.

Given that it’s a known constant to us now in how it behaves, we’re happy that there aren’t further hidden underlying concerns here. If the CPU performs as you require at the buffer setting that you need it to handle, then that is more than good enough for most end users. The fact that it balances out around the 192 buffer level on Ryzen, where we see 95% of the CPU power being leveraged, means that for plenty of users who don’t have the same concerns with low-latency performance, such as the mastering guys who work at higher buffer settings, this could still be a good fit in the studio.

However knowing about this constant performance response at certain buffer settings made me wonder if this would carry across to Threadripper. The announcement that this was going to be 2 CPU’s connected together on one chip then raised my concerns that this was going to experience the same sort of problems that we see with Xeon server chips as we’d take a further performance hit through NUMA overheads. 

So with all that in mind, on with the benchmarks…

On your marks

I took a look at the two Threadripper CPU’s available to us at launch.

The flagship 1950X features 16 cores and a total of 32 threads and has a base clock of 3.4GHz and a potential turbo of 4GHz.

CPUz AMD 1950x
CPUz Details for the 1950X
CPUz AMD 1950X benchmark
CPUz benchmark for the 1950X

 

Along with that, I also took a look at the 1920X, a 12-core with 24 threads, which has a base clock speed of 3.5GHz and an advised potential turbo clock of 4GHz.

CPUz AMD 1920X CPUz AMD 1920X benchmark

First impressions weren’t too dissimilar to when we looked at the Intel i9 launch last month. These chips have a reported 180W TDP at stock settings placing them above the i9 7900X with its purported 140W TDP.

Also much like the i9’s we’ve seen previously it fast became apparent that as soon as you start placing these chips under stressful loads you can expect that power usage to scale up quickly, which is something you need to keep in mind with either platform where the real term power usage can rapidly increase when a machine is being pushed heavily.

History shows us that every time a CPU war starts, the first casualty is often your system temperatures, as the easiest way to increase a CPU’s performance quickly is to simply ramp the clock speeds, although this will often also cause an exponential amount of heat to be dumped into the system as a result. We’ve seen a lot of discussion in recent years about the “improve and refine” product cycles with CPUs, where a new technology in the shape of a die shrink is introduced and then refined over the next generation or two as temperatures and power usage are reduced again, before the whole cycle starts over.

What this means is that with the first generation of any CPU we don’t always expect a huge overclock out of it, and this is certainly the case here. Once again for contrast the 1950X, much like the i9 7900X is running hot enough at stock clock settings that even with a great cooler it’s struggling to reach the limit of its advised potential overclock.

Running with a Corsair H110i cooler, the chip would only hold a stable clock around the 3.7GHz level. The board itself ships with a default 4GHz setting which, when tried, would reset the system whilst running the relatively lightweight Geekbench test routine. I tried to set up a working overclock around that level, but the P-states would quickly throttle me back once it went above 3.8GHz, leaving me to fall back to the 3.7GHz point. This is technically an overclock from the base clock but doesn't meet the suggested turbo max of 4GHz, so the take-home is that you should invest in great cooling when working with one of these chips.

Geekout

Speaking of Geekbench, it's time to break that one out.

Geekbench 4 1950X stock
Geekbench 4 1920X stock

I must admit to having expected more from the multi-core score, especially on the 1950X, even to the point of double-checking the results a number of times. I did take a look at the published results on launch day and saw that my own scores were pretty much in line with the others there at the time. Even now, a few days later, it still appears to be within 10% of the best published results for the chip, which says to me that some people look to have got a bit of an overclock going on with their new setups, but we're certainly not going to be seeing anything extreme anytime soon.


Geekbench 4 Threadripper
Click to expand Geekbench results

When comparing the Geekbench results to other scores from recent chip coverage, it's all largely as we'd expect with the single core scores. There's a welcome improvement over the Ryzen 1700Xs; they've clearly done some fine tuning to the tech under the hood, as the single core score has seen gains of around 10% even whilst running at a slightly slower per-core clock.

One thing I will note at this point is that I was running with 3200MHz memory this time around. There were reports after the Ryzen launch that running with higher-clocked memory could help improve the performance of the CPUs in some scenarios, and it's possible that the single core jump we're seeing might prove to be down as much to the increase in memory clocks as anything else. A number of people have asked me if this impacts audio performance at all, and I've done some testing with the production run 1800Xs and 1700Xs in the months since but haven't seen any benefits to raising the memory clock speeds for real-time audio handling.

We did suspect this would be the outcome as we headed into testing, as memory for audio has been faster than it needs to be for a long time now, although admittedly it was great to revisit it once more and make sure. As long as the system RAM is fast enough to deal with that ASIO buffer, then raising the memory clock speed isn’t going to improve the audio handling in a measurable fashion.
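To put some rough numbers on that claim, here's a back-of-envelope sketch. The channel count and bandwidth figures below are illustrative assumptions rather than measurements, and a heavy Kontakt session streaming samples will touch far more memory than the raw audio paths alone, but the headroom remains enormous either way.

```python
# Rough sketch of why faster RAM rarely helps real-time audio: the raw audio
# streams need only a sliver of the bandwidth DDR4 already provides.
# All figures here are illustrative assumptions, not measurements.
CHANNELS = 256               # mono channels of 32-bit float audio (assumed)
SAMPLE_RATE = 44100          # Hz
BYTES_PER_SAMPLE = 4

stream_mb_s = CHANNELS * SAMPLE_RATE * BYTES_PER_SAMPLE / 1e6
ddr4_mb_s = 40_000           # rough dual-channel DDR4 bandwidth in MB/s (assumed)

print(f"Audio streaming load: ~{stream_mb_s:.0f} MB/s")
print(f"Available memory bandwidth: ~{ddr4_mb_s} MB/s")
print(f"Share of bandwidth used: {stream_mb_s / ddr4_mb_s:.2%}")
```

Even with generous channel counts the audio streams use a fraction of a percent of what the memory can deliver, which is why the bottleneck shows up elsewhere.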

The multicore results show the new AMDs slotting in between the current and last generation Intel top-end models. Whilst the AMDs have made solid performance gains over earlier generations, it has still been widely reported that their IPC (instructions per clock cycle) scores remain behind the sort of results returned by the Intel chips.

Going back to our earlier discussion about how much code you can action on any given CPU core within an ASIO buffer cycle, the key to this is IPC capability. The quicker the code can be actioned, the more efficiently your audio gets processed and so the more you can do overall. This is perhaps the biggest source of confusion when people quote "clocks over cores", as rarely are any two CPUs comparable on clock speeds alone, and a chip with better IPC performance can often outperform CPUs with higher quoted clock frequencies but a lower IPC score.
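As a quick illustration of that point, the sketch below works out the rough instruction budget a single core has inside one ASIO buffer. The IPC figures for the two hypothetical chips are made up purely to show the shape of the comparison; they aren't measurements of any CPU covered here.

```python
# Illustrative only: how clock speed and IPC combine into the amount of work
# one core can get through before the ASIO buffer deadline arrives.
SAMPLE_RATE = 44100          # Hz
BUFFER_SIZE = 128            # samples per ASIO buffer

budget_s = BUFFER_SIZE / SAMPLE_RATE   # ~2.9 ms to process one buffer

def instructions_per_buffer(clock_ghz, ipc):
    """Very rough ceiling on instructions one core can retire per buffer."""
    return clock_ghz * 1e9 * ipc * budget_s

chip_a = instructions_per_buffer(4.0, 1.0)   # higher clock, lower IPC (hypothetical)
chip_b = instructions_per_buffer(3.6, 1.2)   # lower clock, higher IPC (hypothetical)

print(f"Buffer deadline: {budget_s * 1000:.2f} ms")
print(f"Chip A: {chip_a / 1e6:.0f} million instructions per buffer")
print(f"Chip B: {chip_b / 1e6:.0f} million instructions per buffer")
```

Despite the lower clock, the higher-IPC chip gets through more work before the deadline, which is exactly why clock speed alone tells you very little.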

….And GO!

So lengthy explanations aside, we get to the crux of it all.

Much like the Ryzen tests before it, the Threadrippers hold up well in the older DawBench DSP testing run.

DawBench DSP Threadripper
Click To Expand

Both of the chips show gains over the Intel flagship i9 7900X. Given this test uses a single plugin with stacked instances of it and a few channels of audio, what we end up measuring here is raw processor performance: simply stack the instances high and let the CPU get on with it.
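For anyone unfamiliar with that style of test, here is a conceptual sketch of the stacked-instance idea. This is not DAWBench's actual code; process_buffer() is a hypothetical stand-in for a real plugin's per-buffer DSP callback, and the point is simply that you keep adding instances until the work no longer fits inside the buffer deadline.

```python
# Conceptual sketch of a stacked-instance DSP benchmark (not DAWBench itself).
import time

SAMPLE_RATE = 44100
BUFFER_SIZE = 64                         # samples per ASIO buffer
DEADLINE = BUFFER_SIZE / SAMPLE_RATE     # seconds available per callback

def process_buffer(buffer):
    # Hypothetical stand-in for one plugin instance's per-buffer DSP work.
    return [sample * 0.5 for sample in buffer]

def max_stable_instances():
    buffer = [0.0] * BUFFER_SIZE
    instances = 0
    while True:
        start = time.perf_counter()
        for _ in range(instances + 1):
            process_buffer(buffer)
        if time.perf_counter() - start > DEADLINE:
            return instances             # last count that met the deadline
        instances += 1

print(max_stable_instances(), "instances fit inside one buffer")
```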

There is no disputing that there is a sizeable slice of performance to be had here. Much like our previous coverage, however, it starts to show up some performance irregularities when you examine other scenarios, such as the more complex Kontakt-based DawBench VI test.

DawBench VI Threadripper
Click To Expand

The earlier scaling issue at low buffer settings is still apparent this time around, although it looks to have been compounded by the hard NUMA addressing that comes with the multiple-die, single-package design in use here. It once more scales upwards as the buffer is slackened off, but even at the 512 buffer setting I tested, it could only achieve 90% CPU use under load.
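The arithmetic below shows why a fixed per-buffer cost, such as shuffling data between the two dies, hurts far more at small buffers than at large ones. The 0.2ms overhead is a made-up figure chosen purely to illustrate the shape of the effect, not a measured value.

```python
# Illustrative arithmetic: a fixed per-buffer overhead consumes a much bigger
# slice of the deadline at small buffer sizes than at large ones.
SAMPLE_RATE = 44100
FIXED_OVERHEAD_MS = 0.2   # hypothetical per-buffer cross-die transfer cost

for buffer_size in (64, 128, 256, 512):
    deadline_ms = buffer_size / SAMPLE_RATE * 1000
    share = FIXED_OVERHEAD_MS / deadline_ms
    print(f"{buffer_size:>4} samples: {deadline_ms:5.2f} ms deadline, "
          f"overhead eats {share:.0%} of it")
```

Under those assumptions the same overhead swallows around 14% of a 64-sample deadline but under 2% of a 512-sample one, which matches the way utilisation climbs as the buffer is relaxed.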

That, to be fair, is very much what I would expect from any server-class CPU system. In fact, taken on its own, the memory addressing here seems pretty capable compared to some of the other options I've seen over the years; it's just a shame that the other performance response amplifies the symptoms when the system is stressed.

AMD, to their credit, are perfectly aware of the pitfalls of trying to market what is essentially a server CPU setup to an enthusiast market. Their Windows overclocking tool has various options to set up some control and optimize how it deals with NUMA and memory addressing, as you can see below.

AMD Control Panel
Click To Enlarge

I did have a fiddle around with some of the settings here, and the creator mode did give me some marginal gains over the other options, thanks to it appearing to arrange the memory in a well organized and easy to address logical group. Ultimately, though, the performance dips we're seeing come down to a physical addressing issue, in that data has to be moved from X to Y in a given time frame, and no amount of software magic will be able to resolve that for us, I suspect.

Conclusion

I think the verdict here is pretty straightforward if you need to be running below a 256 ASIO buffer: this isn't the platform for that job, although there are certainly some arguments in its favour for mastering engineers who don't need that sort of low-latency response.

Much like the Intel i9s before it, however, there is a strong suggestion that you really do need to consider your cooling carefully here. The normal low-noise, high-end air coolers that I tend to favour for testing were largely overwhelmed once I placed these on the bench, and once the heat started to climb, the water cooler I was using had both fans screaming.

Older readers with long memories might have a clear recollection of the CPU wars that gave us the P4s, Prescotts, Athlon FXs and 64s. We saw both of these firms in a CPU arms race that only really ended when the i7s arrived with the X58 chipset. Over the years this took place we saw ever-rising clock speeds, a rapid release schedule of CPUs and constant gains, although at the cost of heat and ultimately noise levels. In the years since we've had refinement and a vast reduction in heat and noise, but little in the way of performance advancements, at least over the last 5 or 6 generations.

We finally have some really great choices from both firms, and depending on your exact needs and the price points you're working at, there could be arguments in each direction. Personally, I wouldn't consider server-class chips to be the ultimate solution in the studio from either firm currently, not unless you're prepared to spend the sort of money that the tag "ultimate" tends to reflect, in which case you really won't get anything better.

In this instance, if you're doing a load of multimedia work alongside mastering for audio, this platform could fit your requirements well, but for writing and editing music I'd be looking towards one of the other better-value solutions unless this happens to fit your niche.

To see our custom PC selection @ Scan

To see our fixed series range @ Scan

Casting an eye over the Intel i7 Skylake X editions.

Following on from our first look at the i9 7900X, we've now had a chance to examine a few more interesting chips from this enthusiast-class range refresh.

We have two more chips before us today. The first is the i7 7800X, the replacement for the older 6800K, once more offering us 6 physical cores with hyper-threading for a total of 12 logical cores to play with. It runs a 3.5GHz base clock and features an all-core turbo of 4GHz, although being the 6-core model it offers the most overclocking potential we've seen within this range.

The second chip we have here is the 7820X, and on paper it looks to be the most interesting one for me in this generation due to its price-to-performance ratio. Replacing the 6900K from the previous generation but coming in around £350 cheaper, this chip offers 2 more cores and a higher all-core turbo rating, along with a third more cache than the 7800X.

For reference the current price at time of writing for the 7800X is £359 and the 7820X currently retails for £530.

I'm not going to go too much into the platform itself this time around; I gave some background to the changes made in this generation, including possible strengths and flaws, back in the i9 7900X first look over here. If you haven't already checked that out and wish to bring yourself up to speed, now is the time to do so before we go any further.

Everyone up to speed? Then let us begin.

The Long Hot Summer

The first question I had from the off was how these would handle given the heat we saw with the 10-core. The quick answer is: surprisingly well compared to the earlier testing we carried out. The retail releases I've been playing around with here are allowing us to drop the voltages to almost half the level we expected to see with the previous generation, and certainly a few notches lower than in our earlier testing.

So whilst I did hope for some marked improvements in the final release, I didn't quite expect to see them so quickly; normally these sorts of improvements take a few months of manufacturing refinement to appear, and it's great we're seeing this right now. It certainly gives me some confidence that we'll be seeing improvements across the range over the coming batches, and I'm now far more confident that the larger i9s already announced should hold up well when they do finally arrive with us.

If I were to give a rough outline of the state of these Skylake i7s, I'd say they are still running maybe 10% hotter than the last generation Broadwell-E clock for clock. However, Intel has designed these to throttle at 105 degrees, essentially giving them 10% more overhead to play with, so they do seem to be confident in these solutions running that much hotter in use over the longer term.

One thing I noted in testing was a lot of micro-fluctuations across the cores when load testing. By that I mean we'd see temperatures bouncing up and down by anything up to 6 or 7 degrees as we tested, but never on more than a core or two at a time, and they would be pulled straight back down again moments later, only for another core to fluctuate, and so on.

Behind this is Intel's new PCU (Package Control Unit) added to the Skylake X series. Whilst I did note the ability to turn it off inside the BIOS, doing so also brought some additional rise in temperatures. One of the strengths of the PCU and these new P-states appears to be the ability to manage load well, actively aiming to offer the smoothest experience as far as power saving goes. It's certainly welcome, as it does seem to offer more control over the allocation of system performance and doesn't appear to be causing the same sort of issues we saw when C-states first appeared, so this looks to be another useful feature addition.

Once again we're seeing the same sort of 99% CPU load efficiency across the board as we saw when testing in Cubase on the 7900X. This, I suspect, is in no small part down to the board and CPU trying their hardest to strike that power-to-performance balance I mentioned above, and is great to see.

Hit The Bench

On to the figures then and first up the standard synthetics in the shape of Geekbench 4 and the CPU-Z benchmark.

7800X CPU-Z 4 @ 4.4GHz

7800X CPUZ test

7800X Geekbench 4 @ 4.4GHz

Geekbench 4 7800X

The obvious comparison here is to line it up against the previous generation's 6-core solution. The 6800K saw Geekbench single core scores in the region of 4400 and multi-core scores around the 20500 mark, meaning these results sit in the 10% – 15% increase range, which is pretty much where we expect a new generation to be.

7820X CPU-Z 4 @ 4.3GHz

i7 7820X CPUz

7820X Geekbench 4 @ 4.3GHz

7820X Geek4

In a similar fashion we can take a look at the last generation 6900K, which had a Geekbench single core score in the 4200 range and a multi-core score around the 25000 level. Once again we're looking at around a 10% gain in these synthetics, which is pretty much in line with what we'd expect.

Hold the DAW

So far, so expected, and to be honest there aren't any real surprises to be had as we start with the DAWBench DSP test.

Skylake i7 Dawbench 4

With the 7800X we can see small gains over the previous 6800K chip, falling just short of the 10% mark, so perhaps even a little lower than we would have expected. In this test the 7820X offers similarly modest gains over the older 6900K model and doesn't do much to surprise us here either.

7900x DawbenchVi

The DAWBench VI test tells a similar story at the lowest buffer setting, with the 7800X and 7820X both sitting roughly where we expect. What proves to be the one point of interest beyond this, however, is that both chips scale better than their predecessors once you move up to the larger buffer sizes. Whilst testing these chips, much like the high-end 7900X, we saw them managing to hit CPU loads around the 99% mark, and you can see that each chip scaled upwards with better results overall, not only compared with its predecessor but also when placed up against the chip above it in the previous range.

We saw a similar pattern with the Ryzen chips too, and their Infinity Fabric design is similar in practice to the mesh design found in the Skylake X CPUs. The point of these newer mesh-style designs is to improve data transfer within the CPU and allow for improved performance scalability, so with both firms moving firmly in this direction we can expect to see further optimizations from software developers that should continue to benefit both platforms moving forward.

Conclusion

Looking towards the future, there are already plenty of rumours circulating regarding a "Coffee Lake" refresh coming next. This includes a new mid-range flagship that is shaping up to offer a contender against the 7800X, and it might prove to be an interesting option for anyone looking for a system around that level who doesn't need to pick up a new machine right away.

We're also expecting Threadripper to arrive over the next few months, which is no doubt the comparison a lot of people will be waiting on. It'll be interesting to see if the scaling characteristics first exhibited by Ryzen translate across to this newer platform.

The entry-level enthusiast chips have long proven to be the sweet spot for those seeking the best returns on the performance-to-value curve when considering Intel CPUs. This time around, however, whilst the 7800X is a solid chip in its own right, it's looking like the extra money for the 7820X could well offer a stronger bang-per-buck option for those looking to invest in a system around this level.

Click here to see the full range of Scan 3XS Audio Systems