Category Archives: Computer Music – Hardware

First look at the AMD Threadripper 1920X & 1950X

Another month, another chip round-up, with new CPUs still coming thick and fast and hitting the shelves at an almost unprecedented rate.

AMD’s Ryzen range arrived towards the end of Q1 this year, and its impact sent shockwaves through the wider computer industry for the first time in well over a decade for AMD.

Although well received at launch, the Ryzen platform did have the sort of early teething problems that you would expect from any first-generation implementation of a new chipset range. Its strength was that it was great for any software that could effectively leverage the processing performance on offer across the multitude of cores being made available. Whilst perfect for a great many tasks across any number of market segments, the platform did have its inherent weaknesses too, which would crop up in various scenarios, and one field where its design limitations became apparent was real-time audio.

Getting to the core of the problem

The one bit of well-meaning advice that drives system builders up the wall is the “clocks over cores” wisdom that has been offered up by DAW software firms since what feels like the dawn of time. It’s a double-edged sword in that it tries to simplify a complicated issue without ever explaining why, or in what situations, it truly matters.

To give a bit of crucial background as to why this might be, we need to start from the point of view that your DAW software is pretty lousy at parallelization.

That’s it, the dirty secret. The one thing computers are supposed to be good at is breaking down complex chains of data for quick and easy processing, except in this instance, not so much.

Audio works with real-time buffers. Your ASIO driver has those 64/128/256 buffer settings, which are nothing more than chunks of time: data entering the system is captured and held in a buffer until it is full, before being passed over to the CPU to do its magic and get the work done.

If the workload is processed before the next buffer is full then life is great and everything is working as intended. If however the buffer becomes full prior to the previous batch of information being dealt with, then data is lost and this translates to your ears as clicks and pops in the audio.
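To put some very rough numbers on that, the buffer size divided by the sample rate gives you the deadline the CPU has to hit on every single cycle. A minimal sketch of that arithmetic, assuming a 44.1kHz sample rate purely for illustration:

```python
# Rough sketch of the time budget an ASIO buffer gives the CPU per cycle.
# The 44.1kHz sample rate is an assumption for illustration; the same sums
# apply at 48kHz, 96kHz and so on.
SAMPLE_RATE = 44_100  # samples per second

for buffer_samples in (64, 128, 256, 512):
    budget_ms = buffer_samples / SAMPLE_RATE * 1000
    print(f"{buffer_samples:>3}-sample buffer -> roughly {budget_ms:.2f} ms "
          f"to finish the work before the next buffer lands")
```

So a 64 buffer leaves the CPU somewhere in the region of a millisecond and a half per cycle, which is why low buffer settings are so much harder on a system than the headline core count would suggest.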

Now with a single-core system, this is straightforward. Say you’re working with one track of audio to process with some effects. The whole track is sent to the CPU, the CPU processes the chain and spits out some audio for you to hear.

So far so easy.

Now say you have 2 or 3 tracks of audio and 1 core. These tracks will be processed on the available core one at a time, and assuming all the tracks in the pile are processed before the buffer resets, we’re still good. In this instance a faster core means more of these chains can be processed within the allocated buffer time, so more speed really does mean more processing being done in this example.

So now we consider systems with 2 or more cores. The channel chains are passed to the cores as they become available and, once more, each whole channel chain is processed on a single core.

Why?

Because splitting a channel over more than one core would require us to divide up the workload and then recombine it all again after processing, which for real-time audio would leave other components in the chain waiting for the data to be shuttled back and forth between the cores. All this lag means we’d lose processing cycles as that data is ferried about, and we’d continue to lose more performance with each and every added core, something I will often refer to as processing overhead.
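As a toy model of that behaviour (and only a toy model, not the scheduler of any particular DAW), you can think of each whole channel chain being handed to whichever core is least busy, with the project only staying glitch-free if every core finishes its pile inside the buffer deadline:

```python
# Toy model: whole channel chains are handed to cores one at a time, and a
# chain is never split across cores. The costs and the deadline below are
# invented numbers purely to illustrate the idea.

def schedule(chain_costs_ms, num_cores, budget_ms):
    """Greedy assignment of whole chains to the least-loaded core."""
    core_load = [0.0] * num_cores
    for cost in chain_costs_ms:
        idx = core_load.index(min(core_load))  # pick the least busy core
        core_load[idx] += cost                 # the whole chain lands there
    return max(core_load) <= budget_ms, core_load

chains = [0.8, 1.2, 0.5, 1.0, 0.9, 0.7]  # hypothetical per-chain costs in ms
ok, load = schedule(chains, num_cores=4, budget_ms=2.9)  # ~128 samples at 44.1kHz
print("glitch-free" if ok else "buffer overrun", [round(x, 1) for x in load])
```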

Clock watching

The upshot of this is that lower-clocked chips can often be less efficient than higher-clocked chips, especially with newer, more demanding software.

So, just for an admittedly extreme example, say that you have the two following chips.

CPU 1 has 12 cores running at 2GHz

CPU 2 has 4 cores running at 4GHz

The maths looks simple: 2 x 12 beats 4 x 4 on paper. But in this situation it comes down to software and processing chain complexity. If you have a particularly demanding plugin chain that is capable of overloading one of those 2GHz CPU cores, then the resulting glitching will proceed to ruin the output from the other 11 cores.

In this situation, the more headroom you have to play with on each core, the less chance there is that an overly demanding plugin is going to sink the lot.
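A quick sketch of that trap, with all the numbers invented purely for illustration: the 12-core chip wins on total capacity, but the heaviest single chain still has to fit on one core within the buffer deadline.

```python
# "Clocks over cores" in miniature: one demanding channel chain must fit on
# a single core. The chain cost and the crude linear clock scaling below are
# assumptions for illustration, ignoring IPC and turbo behaviour entirely.

buffer_ms = 2.9           # ~128 samples at 44.1kHz
chain_cost_at_4ghz = 2.0  # ms the heaviest chain takes on a hypothetical 4GHz core

for name, cores, clock_ghz in (("CPU 1", 12, 2.0), ("CPU 2", 4, 4.0)):
    chain_cost = chain_cost_at_4ghz * (4.0 / clock_ghz)  # slower clock, longer chain
    verdict = "fine" if chain_cost <= buffer_ms else "overruns and glitches the lot"
    print(f"{name}: {cores} x {clock_ghz}GHz, heaviest chain needs {chain_cost:.1f} ms "
          f"of the {buffer_ms} ms budget -> {verdict}")
```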

This is also one of the reasons we tend to steer clear of single server CPUs with high core counts and low clock speeds, and it is largely what the general advice is referring to.

On the other hand, when we compare a 4-core CPU at 4GHz with an 8-core CPU at 3.5GHz, the difference in clock speeds isn’t going to be enough to cause problems with even the busiest of chains, and once that is the case, more cores on a single chip become the more attractive proposition as far as getting the best performance out is concerned.

Seeing Double

So with that out of the way, we’ll quickly cover the other problematic issue with server chips, which is the data exchange process between memory banks.

Dual-chip systems are capable of offering the ultimate levels of performance, this much is true, but we have to remember that returns on your investment diminish quickly as we move up through the models.

Not only do we have the concerns outlined above about cores and clocks, but when you move to more than one CPU you have to start considering the “NUMA” (non-uniform memory access) overheads caused by using multiple processors.

CPUs can exchange data between themselves via high-speed connections, and in AMD’s case this is done via an extension of the Infinity Fabric design that allows the quick exchange of data between the cores both on and off the chip(s). The memory holds data until it’s needed, and in order to ensure the best performance the system tries to store the data on the physical RAM stick nearest to the physical core. By keeping the distance between them as short as possible, it ensures the least amount of lag between information being requested and it being received.

This is fine when dealing with one CPU: in the event that a bank of RAM is full, moving and rebalancing the data across other memory banks isn’t going to add too much lag to the data being retrieved. However, when you add a second CPU to the setup along with an additional set of memory banks, you suddenly find yourself trying to manage the data being sent and called between the chips as well as between the memory banks attached to them. In this instance, when a RAM bank is full the data might end up being bounced to free space on a bank connected to the other CPU in the system, meaning it may have to travel that much further across the board when being accessed.

As we discussed in the previous section, any wait for data to be called can cause inefficiencies where the CPU sits idle until the data arrives. All this happens in microseconds, but if it ends up happening hundreds of thousands of times every second, our ASIO meter ends up looking like it’s overloading due to lagged data being dropped everywhere, whilst our CPU performance meter may look like it’s only half used at the same time.
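A back-of-the-envelope way to picture this, with every figure below being a placeholder rather than a measurement of any real system, is to add up how much of the buffer budget gets eaten purely by the extra distance the data has to travel:

```python
# Why tiny NUMA hops add up over a buffer cycle. All latencies and counts
# here are invented placeholders, not measurements of any real system.

local_ns  = 80        # hypothetical latency for RAM attached to the same die
remote_ns = 140       # hypothetical latency when the access crosses to the other die
accesses_per_buffer = 10_000   # made-up number of memory fetches per buffer cycle
buffer_budget_ms = 2.9         # ~128 samples at 44.1kHz

extra_ms = accesses_per_buffer * (remote_ns - local_ns) / 1_000_000
print(f"Extra stall from remote accesses: about {extra_ms:.2f} ms "
      f"of a {buffer_budget_ms} ms buffer budget")
```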

This means we do tend to expect an overhead when dealing with dual-chip systems. Exactly how much depends entirely on what’s being run on each channel and how much data is being exchanged internally between those chips, but the take-home is that we expect to have to pay a lot more for server-grade solutions that can match the high-end enthusiast class chips we see in the consumer market, at least in situations where real-time workloads are crucial, like dealing with ASIO-based audio. It’s a completely different scenario for a task like offline rendering for video, where the processor and RAM are system managed, working in their own time and to their own rules; server-grade CPU options there make a lot of sense and are very, very efficient.

To server and protect

So why all the server background when we’re looking at desktop chips today? Threadripper has been positioned as AMD’s answer to Intel’s enthusiast range of chips and largely a direct response to the i7 7800X and 7820X and the i9 7900X that launched just last month, with AMD’s EPYC server-grade chips still waiting in the wings.

An early de-lidding of the Threadripper series chips quickly showed us that the basis of the new chips is two Zen CPUs connected together. The “Infinity Fabric” core interconnect design makes it easy for AMD to add more cores and expand these chips up through the range; indeed their server solution EPYC is based on the same “Zen” building blocks at its heart as both Ryzen and Threadripper, with just more cores piled in there.

Knowing this before testing gave me certain expectations going in that I wanted to examine. The first was Ryzen’s previously inefficient core handling when dealing with low-latency workloads, where we established in the earlier coverage that the efficiency of the processor at lower buffer settings would suffer.

This I suspected was an example of data transfer lag between cores, and at the time of that last look we weren’t certain how consistent it might prove to be across the range. Without more experience of the platform we didn’t know if this was inherent to the design or if it might be solved in a later update. As we’ve seen since launch, and having checked other CPUs in testing, this performance scaling appears to be constant across all the chips we’ve seen so far and something that can certainly be reliably replicated.

Given that it’s a known constant to us now in how it behaves, we’re happy there aren’t further underlying concerns hidden here. If the CPU performs as you require at the buffer setting you need it to handle, then that is more than good enough for most end users. The fact that it balances out around the 192 buffer level on Ryzen, where we see 95% of the CPU power being leveraged, means that for plenty of users who don’t share the low-latency concerns, such as mastering engineers who work at higher buffer settings, this could still be a good fit in the studio.

However, knowing about this consistent performance response at certain buffer settings made me wonder if it would carry across to Threadripper. The announcement that this was going to be two CPUs connected together on one package raised my concern that it would suffer the same sort of problems we see with Xeon server chips, as we’d take a further performance hit through NUMA overheads.

So with all that in mind, on with the benchmarks…

On your marks

I took a look at the two Threadripper CPUs available to us at launch.

The flagship 1950X features 16 cores and a total of 32 threads, with a base clock of 3.4GHz and a potential turbo of 4GHz.

CPU-Z details for the 1950X
CPU-Z benchmark for the 1950X


Alongside that I also took a look at the 1920X, a 12-core with 24 threads, which has a base clock speed of 3.5GHz and an advised potential turbo clock of 4GHz.

CPU-Z details and benchmark for the 1920X

First impressions weren’t too dissimilar to when we looked at the Intel i9 launch last month. These chips have a reported 180W TDP at stock settings placing them above the i9 7900X with its purported 140W TDP.

Also, much like the i9s we’ve seen previously, it quickly became apparent that as soon as you place these chips under stressful loads you can expect power usage to scale up rapidly, something to keep in mind with either platform, where real-world power draw can increase quickly when a machine is being pushed heavily.

History shows us that every time a CPU war starts, the first casualty is often your system temperatures, as the easiest way to quickly increase a CPU’s performance is to simply ramp the clock speeds, although this often results in a disproportionate amount of heat being dumped into the system as a result. We’ve seen a lot of discussion in recent years about the “improve and refine” product cycles with CPUs, where new tech in the shape of a die shrink is introduced and then refined over the next generation or two as temperatures and power usage are brought back down, before the whole cycle starts again.

What this means is that we don’t always expect a huge overclock out of the first generation of any CPU, and that is certainly the case here. For contrast, the 1950X, much like the i9 7900X, runs hot enough at stock clock settings that even with a great cooler it struggles to reach the limit of its advised potential turbo.

Running with a Corsair H110i cooler, the chip only seems to hold a stable clock around the 3.7GHz level without any problems. The board itself ships with a default 4GHz setting, which when tried would reset the system whilst running the relatively lightweight Geekbench test routine. I tried to set up a working overclock around that level, but the P-states would quickly throttle me back once it went above 3.8GHz, leaving me to fall back to the 3.7GHz point. This is technically an overclock from the base clock but doesn’t meet the suggested turbo max of 4GHz, so the take-home is that you should make sure you invest in great cooling when working with one of these chips.

Geekout

Speaking of Geekbench, it’s time to break that one out.

Geekbench 4 1950X stock
Geekbench 4 1920X stock

I must admit I expected more from the multi-core score, especially on the 1950X, to the point of double-checking the results a number of times. I did take a look at the published results on launch day and saw that my own scores were pretty much in line with the other results there at the time. Even now, a few days later, it still appears to be within 10% of the best published results for the chip, which says to me that some people do look to have a bit of an overclock going on with their new setups, but we’re certainly not going to be seeing anything extreme any time soon.


Geekbench 4 Threadripper
Click to expand Geekbench results

When comparing the Geekbench results to other scores from recent chip coverage, it’s all largely as we’d expect with the single-core scores. There is a welcome improvement over the Ryzen 1700Xs; they’ve clearly done some fine tuning to the tech under the hood, as the single-core score has seen gains of around 10% even whilst running at a slightly slower per-core clock.

One thing I will note at this point is that I was running with 3200MHz memory this time around. There were reports after the Ryzen launch that running with higher-clocked memory could help improve the performance of the CPUs in some scenarios, and it’s possible that the single-core jump we’re seeing might prove to be down as much to the increase in memory clocks as anything else. A number of people have asked me if this impacts audio performance at all, and I’ve done some testing with the production-run 1800Xs and 1700Xs in the months since, but haven’t seen any benefit from raising the memory clock speeds for real-time audio handling.

We did suspect this would be the outcome as we headed into testing, as memory for audio has been faster than it needs to be for a long time now, although admittedly it was great to revisit it once more and make sure. As long as the system RAM is fast enough to deal with that ASIO buffer, then raising the memory clock speed isn’t going to improve the audio handling in a measurable fashion.

The multi-core results show the new AMDs slotting in between the current and last generation Intel top-end models. Whilst the AMDs have made solid performance gains over earlier generations, it has still been widely reported that their IPC (instructions per clock cycle) scores remain behind the sort of results returned by the Intel chips.

Going back to our earlier discussion about how much code you can action on any given CPU core within an ASIO buffer cycle, the key to this is the IPC capability. The quicker the code can be actioned, the more efficiently your audio gets processed and the more you can do overall. This is perhaps the biggest source of confusion when people quote “clocks over cores”, as rarely are any two CPUs comparable on clock speeds alone, and a chip with better IPC performance can often outperform CPUs with higher quoted clock frequencies but a lower IPC score.
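If you want a rough way to think about it, useful per-core throughput is approximately IPC multiplied by clock speed. The figures below are made up purely to show the arithmetic, not drawn from any benchmark:

```python
# Why clock speed alone doesn't settle the argument: per-core throughput is
# roughly IPC x clock. Both chips and their IPC figures are hypothetical.

chips = {
    "Chip A": {"clock_ghz": 4.0, "ipc": 1.0},  # higher clock, weaker IPC
    "Chip B": {"clock_ghz": 3.6, "ipc": 1.2},  # lower clock, stronger IPC
}

for name, spec in chips.items():
    throughput = spec["clock_ghz"] * spec["ipc"]
    print(f"{name}: {spec['clock_ghz']}GHz x IPC {spec['ipc']} "
          f"-> relative per-core throughput {throughput:.2f}")
```

In this made-up case the slower-clocked chip comes out ahead per core, which is exactly the sort of result the “clocks over cores” shorthand glosses over.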

….And GO!

So lengthy explanations aside, we get to the crux of it all.

Much like the Ryzen tests before it, the Threadrippers hold up well in the older DawBench DSP testing run.

DawBench DSP Threadripper
Click To Expand

Both of the chips show gains over the Intel flagship i9 7900X, and given this test uses stacked instances of a single plugin plus a few channels of audio, what we end up measuring here is raw processor performance: simply stack them high and let it get on with it.

There is no disputing that there is a sizable slice of performance to be had here. Much like our previous coverage, however, it starts to show up some performance irregularities when you examine other scenarios, such as the more complex Kontakt-based test, DawBench VI.

DawBench VI Threadripper
Click To Expand

The earlier scaling behaviour at low buffer settings is still apparent this time around, although it looks to have been compounded by the hard NUMA addressing that is in place due to the multiple-dies-in-one-package design in use. Once more it scales upwards as the buffer is slackened off, but even at the 512 buffer setting that I tested, it could only achieve 90% CPU usage under load.

That, to be fair, is very much what I would expect from any server-CPU-based system. In fact, taken on its own, the memory addressing here seems pretty capable when compared to some of the other options I’ve seen over the years; it’s just a shame that the other performance response amplifies the symptoms when the system is stressed.

AMD, to their credit, are perfectly aware of the pitfalls of trying to market what is essentially a server CPU setup to an enthusiast market. Their Windows overclocking tool has various options to give you some control and optimize how it deals with NUMA and memory addressing, as you can see below.

AMD Control Panel
Click To Enlarge

I did have a fiddle around with some of the settings here, and the Creator Mode did give me some marginal gains over the other options, thanks to it appearing to arrange the memory into a well-organized and easy-to-address logical group. Ultimately, though, the performance dips we’re seeing are down to a physical addressing issue, in that data has to be moved from X to Y in a given time frame, and I suspect no amount of software magic will be able to resolve that for us.

Conclusion

I think this one is a pretty straightforward call if you need to be running below a 256 ASIO buffer, although there are certainly arguments in its favour for mastering engineers who don’t need that sort of low-latency response.

Much like the Intel i9s before it, however, there is a strong suggestion that you really do need to consider your cooling carefully here. The normal low-noise, high-end air coolers that I tend to favour for testing were largely overwhelmed once I placed these on the bench, and once the heat started to climb, the water cooler I was using had both fans screaming.

Older readers with long memories might have a clear recollection of the CPU wars that gave us the P4s, Prescotts, Athlon FXs and 64s. We saw both of these firms in a CPU arms race that only really ended when the i7s arrived with the X58 chipset. Over those years we saw ever-rising clock speeds, a rapid release schedule of CPUs and constant gains, although at the cost of heat and ultimately noise levels. In the years since we’ve had refinement and a vast reduction in heat and noise, but little in the way of performance advancement, at least over the last 5 or 6 generations.

We finally have some really great choices from both firms, and depending on your exact needs and the price points you’re working at, there could be arguments in each direction. Personally, I wouldn’t consider server-class chips to be the ultimate solution in the studio from either firm currently, not unless you’re prepared to spend the sort of money that the tag “ultimate” tends to reflect, in which case you really won’t get anything better.

In this instance, if you’re doing a load of multimedia work alongside mastering for audio, this platform could fit your requirements well, but for writing and editing some music I’d be looking towards one of the other better value solutions unless this happens to fit your niche.

To see our custom PC selection @ Scan

To see our fixed series range @ Scan

Casting an eye over the Intel i7 Skylake X editions

Following on from our first look at the i9 7900X, we’ve now had a chance to take a look over a few more interesting chips from this enthusiast class range refresh. 

We have before us today two more chips, the first being the i7 7800X, which is the replacement for the older 6800K, once more offering us 6 physical cores with hyper-threading for a total of 12 logical cores to play with. It’s running a 3.5GHz base clock and features an all-core turbo of 4GHz, and being the 6-core it offers the most overclocking potential we’ve seen within this range.

The second chip we have here is the 7820X, and on paper it looks to be the most interesting one for me in this generation due to its price-to-performance ratio. Replacing the 6900K from the previous generation but coming in at around £350 less, this chip offers two more cores, a higher all-core turbo rating and a third more cache than the 7800X edition.

For reference the current price at time of writing for the 7800X is £359 and the 7820X currently retails for £530.

I’m not going to go too much into the platform itself this time around; I gave some background to the changes made in this generation, including possible strengths and flaws, back in the i9 7900X first look over here. If you haven’t already checked that out and wish to bring yourself up to speed, now is the time to do so before we go any further.

Everyone up to speed? Then let us begin.

The Long Hot Summer

The first question I had from the off was how these are going to handle given the heat we saw with the 10-core. The quick answer is surprisingly well compared to the earlier testing we carried out. The retail releases I’ve been playing around with here allow us to drop the voltages to almost half the level we expected to see with the previous generation, and certainly a few notches lower than we saw in our earlier testing.

So whilst I did hope for some marked improvements in the final release, I didn’t quite expect to see them so quickly. Normally these sorts of improvements take a few months of manufacturing refinement to appear, and it’s great we’re seeing this right now. It certainly gives me some confidence that we’ll be seeing improvements across the range over the coming batches, and I’m now far more confident that the larger i9s that have already been announced should hold up well when they finally arrive with us in the future.

If I were to give a rough outline of the state of these Skylake i7s, I’d say they are still running maybe 10% hotter than the last generation Broadwell-E, clock for clock. However, Intel has designed these to throttle at 105 degrees, essentially giving them 10% more overhead to play with, so they do seem to be confident in these solutions running that much hotter in use over the longer term.

One thing I noted in testing was that we were seeing a lot of micro-fluctuations across the cores when load testing. By that I mean we’d see temperatures bouncing up and down by anything up to 6 or 7 degrees as we tested, but never on more than a core or two at a time, and they would be pulled straight back down again moments later, only for another core to fluctuate, and so on.

Behind this is Intel’s new PCU (Package Control Unit) that has been added to the Skylake X series, and whilst I did note the ability to turn it off inside the BIOS, doing so also saw some additional rise in temperatures. One of the strengths of the PCU and these new P-states appears to be the ability to manage load well, and it actively aims to offer the smoothest experience as far as power saving goes. It’s certainly welcome, as it does seem to offer more control over the allocation of system performance and doesn’t appear to be causing the same sort of problems the C-states did when they first appeared, so this looks to be another welcome feature addition at this time.

Once again we’re seeing the same sort of 99% CPU load efficiency across the board as we saw when testing in Cubase on the 7900X. This I suspect is in no small part down to the board and CPU trying their hardest to strike the power-to-performance balance I mention above, and it is great to see.

Hit The Bench

On to the figures then and first up the standard synthetics in the shape of Geekbench 4 and the CPU-Z benchmark.

7800X CPU-Z @ 4.4GHz

7800X CPUZ test

7800X Geekbench 4 @ 4.4GHz

Geekbench 4 7800X

The obvious comparison here is to line it up against the previous generation’s 6-core solution. The 6800K saw Geekbench single-core scores in the region of 4400 and multi-core scores around the 20500 mark, meaning these results sit in the 10% – 15% increase range, which is pretty much where we expect a new generation to be.

7820X CPU-Z @ 4.3GHz

i7 7820X CPUz

7820X Geekbench 4 @ 4.3GHz

7820X Geek4

In a similar fashion we can take a look at the last generation 6900K, which had a Geekbench single-core score in the 4200 range with the multi-core sitting around the 25000 level. Once again we’re looking at around a 10% gain in these synthetics, which is pretty much in line with what we’d expect.

Hold the DAW

So far, so expected, and to be honest there aren’t any real surprises to be had as we start with the DAWBench DSP test.

Skylake i7 Dawbench 4

With the 7800X we can see small gains over the previous 6800K chip, falling just short of the 10% mark, so perhaps even a little lower than we would have expected. In this test the 7820X offers similarly modest gains over the older 6900K model and doesn’t do much to surprise us here either.

7900x DawbenchVi

The DAWBench VI test tells a similar story at the lowest buffer setting, with the 7800X and 7820X both sitting roughly where we expect. What proves to be the one point of interest beyond this, however, is that both chips scale better than their previous iterations once you move up to the larger buffer sizes. Whilst testing these chips, much like the high-end 7900X, we saw them managing to hit CPU loads around the 99% mark, but you can see that each chip scaled upwards with better results overall when compared not only with its predecessor but also with the chip above it in the previous range.

We saw a similar pattern with the Ryzen chips too, and their Infinity Fabric design is similar in practice to the mesh design found in the Skylake X CPUs. The point of these newer mesh-style designs is to improve data transfer within the CPU and allow for improved performance scalability, so with both firms moving firmly in this direction we can expect further optimizations from software developers in the future that should continue to benefit both platforms moving forward.

Conclusion

Looking towards the future, there are already plenty of rumours circulating regarding an expected “Coffee Lake” refresh coming next. This includes a new mid-range flagship that is shaping up to offer a contender against the 7800X, and it might prove an interesting option for anyone looking for a new system around that level who doesn’t need to pick one up right away.

Also we’re expecting Threadripper to arrive with us over the next few months which is no doubt the comparison that a lot of people will be waiting on. It’ll be interesting to see if the scaling characteristics that were first exhibited by Ryzen get translated across to this newer platform.  

The entry-level enthusiast chips have long proven to be the sweet spot for those seeking the best returns on the performance-to-value curve when considering Intel CPUs. This time around, however, whilst the 7800X is a solid chip in its own right, it’s looking like the extra money for the 7820X could well offer the stronger bang-per-buck option for those looking to invest in a system around this level.

Click here to see the full range of Scan 3XS Audio Systems

Intel i9 7900X First Look

Intel’s i9 announcement this year felt like it pretty much came out of nowhere, and whilst everyone was expecting Intel to refresh its enthusiast range, I suspect few people anticipated quite the spread of chips that has been announced over recent months.

So here we are looking at the first entry in Intel’s new high-end range. I’ve split this first look into two parts, with this section devoted to the i9 7900X along with some discussion of the lower-end models as the full range is explained. I’ll follow up with a post covering the i7s that come in below this model, just as soon as we have the chance to grab some chips and run those through the test bench too.

There has been a sizable amount of press about this chip already as it was the first one to make it out into the wild along with the 4 core Kabylake X chips that have also appeared on this refresh, although those are likely to be of far less interest to those of us looking to build new studio solutions.

A tale of two microarchitectures.

Kabylake X and Skylake X have both launched at the same time, which certainly raised eyebrows in confusion from a number of quarters. Intel’s own tick/tock cycle of advancement and process refinement has gone askew in recent years, and the “high end desktop” (HEDT) models, just like the midrange CPUs at the start of this year, have now gained a third generation at the same 14nm manufacturing process level in the shape of Kabylake.

Kabylake in its mid-range release kept the same 14nm design as the Skylake series before it and eked out some more minor gains through platform refinement. In fact, some of the biggest changes were found in the improved onboard GPU rather than in raw CPU performance itself, and as always the onboard GPU is one of the key things missing in the HEDT edition. All this means that whilst we technically have two different chip ranges in this release, there isn’t a whole lot left to differentiate between them. In fact, given how the new chip ranges continue to steam ahead in the mid-range, this looks like an attempt to bring the high-end options back up to parity with the current mid-range quickly, which I think will ultimately make things less confusing in future versions, even if right now it has managed to confuse things within the range quite a bit.

Kabylake X itself took a sizable amount of flak prior to launch and certainly appears to raise a lot of questions at first glance. The whole selling point of the HEDT chips up until this point has largely been more cores and more raw performance, so the announcement of what is essentially a mid-range i5/i7-grade 4-core CPU solution appearing on this chipset was somewhat of a surprise to a lot of people.

As with the other models in this chipset range, the 4-cores are being marketed as enthusiast solutions, although in this instance we see them aimed at the gaming enthusiast segment. There have been some early reports of high overclocks, but so far these look to be largely cherry-picked, as the gains seen in early competition benchmarking have been hard to achieve with the early retail models currently appearing.

Whilst ultimately not of much interest in the audio and video worlds, where the software can leverage far more cores than the average game, there is potentially a solid opportunity here for the gaming market they appear to be going after, if they can refine these chips for overclocking over the coming months. However, early specification and production choices have been head-scratchingly odd so far, although we’ll come back to this a bit later.

Touch the Sky(lake).

So at the other end of the spectrum from those Kabylake X chips is the new flagship, for the time being, in the shape of the Skylake 7900X. 10 physical cores with hyper-threading give us a total of 20 logical cores to play with here. This is the first chip announced from the i9 range, and larger 12, 14, 16 and 18-core editions are all pencilled in over the coming year or so; however, details on them are scarce at this time.

Intel Core X comparison table

At first glance it’s a little confusing as to why they would even make this chip the first of its class when the rest of the range isn’t fully unveiled at this point. Looking through the rest of the range specifications alongside it, it becomes clear that they look to be reserving the i9 branding for CPUs that can handle a full 44+ PCIe lane configuration. These lanes are used to offer bandwidth to connected cards and high-speed storage devices, and needless to say this has proven a fairly controversial move as well.

The 7900X offers up the full complement of those 44 lanes, although the 7820X and 7800X chips that we’ll be looking at in forthcoming coverage both arrive with 28 lanes in place. For most audio users this is unlikely to make any real difference, as the key use for all those lanes is often GPUs, where x16 cards are the standard and anyone wanting to fit more than one is going to appreciate the extra lanes for the bandwidth. With the previous generation we even tended to advise going with the entry-level 6800K for audio over the 6850K above it, which cost 50% more and offered very little benefit in the performance stakes beyond ramping up the number of available PCIe lanes, reserving that option instead for anyone running multiple GPUs in the system, such as users with heavy video editing requirements.

Summer of 79(00X)

So what’s new?

Much like AMD and their Infinity Fabric design, which was implemented to improve cross-core communication within the chip itself, Intel has arrived with its own “Mesh” technology.

Functioning much like AMD’s design, it removes the ring-based communication path between cores and RAM and implements a multi-point mesh design, brought in to enable shorter paths between them. In my previous Ryzen coverage I noted some poor performance scaling at lower buffer settings, which seemed to smooth itself out once you went over a 192 buffer setting. In the run-up to this, I’ve retested a number of CPUs and boards on the AMD side, and it does appear that even after a number of tweaks and improvements at the BIOS level the scaling is still the same. On the plus side, as it’s proven to be a known constant and always manifests in the same manner, I feel a lot more comfortable working with them now we are fully aware of this.

In Intel’s case I had some apprehension going in, given it is the company’s first attempt at this in a consumer-grade solution, that perhaps we’d be seeing the same sort of performance limitations we saw on the AMDs, but so far, at least with the 7900X, the internal chip latency has been superb. Even running at a 64 buffer we’ve been seeing 100% CPU load prior to the audio breaking up in playback, making this one of the most efficient chips I think I’ve ever had on the desk.

i9 CPU load


So certainly a plus point there as the load capability seems to scale perfectly across the various buffer settings tested.

Raw performance wise, I’ve run it through both CPU-Z and Geekbench again.

CPU-Z 7900X

Geekbench 4 7900X

GeekBench 4

The multi-core result in Geekbench looks modest, although it’s worth noting the single-core gains here compared with the previous generation 10-core, the 6950X. The basic DAWBench 4 test doesn’t really show us any great gains either, rather returning the sort of minor bump in performance that we’d expect.

DAWBench 4 7900X

However whilst more cores can help spread the load, a lot of firms have always driven home the importance of raw clock speeds as well and once we start to look at more complex chains this becomes a little clearer. A VSTi channel with effects or additional processing on it needs to be sent to the CPU as a whole chain as it proves rather inefficient to chop up a channel signal chain for parallel processing.

A good single-core score can mean slipping in just enough extra work to squeeze in another full channel and effects chain, and once you multiply that by the number of cores here, it’s easy to see how the combination of a large number of cores and a high single-core score can translate into a higher total track count, something we see manifest in the Kontakt-based DAWBench VI test.
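As a very crude illustration of how those two factors multiply together, with the per-chain cost below being an invented figure rather than anything measured:

```python
# Crude track-count estimate: per-core headroom times core count. The chain
# cost is a made-up figure; real plugin chains vary wildly.

buffer_ms = 2.9        # ~128 samples at 44.1kHz
chain_cost_ms = 0.45   # hypothetical cost of one VSTi channel plus effects
cores = 20             # logical cores on the 7900X

chains_per_core = int(buffer_ms // chain_cost_ms)  # whole chains that fit on one core
print(f"{chains_per_core} chains per core x {cores} cores "
      f"= roughly {chains_per_core * cores} channels before the buffer overruns")
```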


In this instance the performance gains over the previous generation seem quite sizable, and whilst there is no doubt gains have been had from the change in architecture and the high-efficiency CPU usage we’ve already seen, it should be noted that there is also close to a 20% increase in clock speed in play here.

When we test, we aim to do so around the all-core turbo level. Modern Intel CPUs have two turbo ratings: one is the “all core” level to which all the cores can auto-boost if the temperatures are safe, and the other is the “Turbo 3.0” mode, which boosted a single core in previous generations but now boosts the two strongest cores where the system permits.

The 7900X has a two-core turbo of 4.5GHz, but we’ve chosen to lock it off at the all-core turbo point in this testing. Running at stock clock levels we saw it boost the two cores correctly a number of times, but even under stress testing the two-core maximum couldn’t be held constantly without overheating on the low-noise cooling solution we are using. The best we managed was a constant 4.45GHz at a temperature we were happy with, so we dialled it back to the all-core turbo clock speed of 4.3GHz across all cores and locked it in place for the testing, with it behaving well around this level.

It’s not uncommon for the first few batches of silicon on any new chip range to run a bit hot, and normally this tends to get better as the generation is refined. It’s the first time we’ve seen these sorts of temperatures on a chip range, however, and there is a strong argument for going with either one of the top two or three air coolers on the market currently or defaulting to a water-loop-based cooling setup for any machine considering this chip. In a tower case this shouldn’t prove a problem, but for rack systems I suspect the 7900X might prove to be off limits for the time being.

I’d fully expect the i7s that come in below it to be more reasonable, and we should know about that in the next update, but it does raise some questions regarding the chips higher up in the i9 range that are due with us over the next 12 months. There has already been some debate about Intel choosing to go with thermal paste between the chip and the heat spreader rather than the more effective soldering method, although early tests by users de-lidding their chips haven’t returned much more than 10 degrees’ worth of improvement, which is a fairly small gain for such a drastic step. We can only hope they figure out a better way of handling the chip’s thermals with the impending i9s, or simply return to the older soldered method; otherwise it could be quite some time until we see the no doubt hotter 12+ core editions making it to market.

Conclusion

In isolation, it looks fine from a performance point of view and gives the average sort of generation-on-generation gains we would expect from an Intel range refresh, maybe pumped up a little as they’ve chosen to release them to market with raised base clocks. This leaves little room for overclocking, but it does suit the buyer who simply wants the fastest model they can get out of the box and to run it at stock.

The problem is that this isn’t in isolation and whilst we’ve gotten used to Intel’s 10% year on year gains over recent generations, there has to be many a user who longs for the sort of gains we saw when the X58 generation arrived or even when AMD dropped the Athlon 64 range on us all those years ago.

Ryzen made that sort of gain upon release, although AMD were so far behind that it didn’t do much more than bring them back to parity. This refresh puts Intel in a stronger place performance-wise, and it has to be noted that this chip has been incoming for a while, certainly since long before Ryzen reignited the CPU war; it feels like they may simply have squeezed it a bit harder than normal to make it look more competitive.

This isn’t a game-changing response to AMD. I doubt we’ll be seeing that for a year or two at this point, and it will give AMD continued opportunities to apply pressure. What it has done, however, is what a lot of us hoped for initially, and that is forcing Intel to re-examine its pricing structure to some degree.

What we have here is a 10-core CPU for a third less than the last generation 10-core CPU they released. Coming in at around £900, it rebalances the performance-to-price ratio to quite some degree and will no doubt once more help make the “i” series CPUs attractive to more than a few users again, after a number of months of that being very much up for debate in various usage segments.

So will the impending AMD Threadripper upset this again?

I guess we’re going to find out soon enough over the coming months, but one thing for sure is that we’re finally seeing some competition here again, firstly on pure pricing, and surely this should be a safe bet for kick-starting some CPU advancements again. This feels kinda like the Prescott vs Athlon 64 days, and the upshot of that era was some huge gains in performance and solid improvements being made generation upon generation.

The cost and overall performance here keeps the 7900X in the running despite its obvious issues, and that raw grunt on offer makes it a very valid choice where the performance is required. The only real fly in the ointment is the heat and noise requirements most audio systems have, although hopefully as the silicon yields improve and refine this will mature into a cooler solution than it is now. It’s certainly going to be interesting to see how this pans out as the bigger models start making it to market over the coming year or so and of course with the smaller i7 brethren over the coming days.

To see our complete audio system range @ Scan


Time to light the FUSE?

Arturia has announced that possibly one of the most awaited audio interfaces of all time is imminently due to arrive with us.

Initially unveiled back at the NAMM show in January 2015 and billed as a “revolutionary next-gen pro audio interface”, the AudioFUSE got a lot of interest as a feature-packed interface that looked to be a step ahead of a lot of the competition at the time.

So what happened? Well Arturia have published a little video explaining the delay and to be fair it’s commendable. They take on board that they may have been a little keen in the initial announcement and have spent the time since listening to feedback from their beta testers as well as improving the manufacturing process. All good to hear and hopefully should result in a far superior product. You can hear what they have to say in their own words below.

So two years down the line and now that it is finally due to arrive with us how does it look now?

Still very promising from what we can see. 

The goal of the interface hasn’t changed. What we have here is an ultra-portable recording solution that doesn’t rely upon troublesome breakout cables for its I/O handling. It’s built into a solid aluminium chassis and promises to be capable of being thrown in your bag and taken out on the road to give you studio-quality recordings wherever you are.

Audio Fuse Specifications

1. Inserts

Add external line-level devices such as compressors into the signal flow before digital conversion.

2. MIDI in/out

Connect any MIDI instrument or equipment with the supplied MIDI cable adapters.

3. Word Clock & S/PDIF in/out

Sync any Word Clock equipment or connect to any S/PDIF digital audio device.

4. ADAT in/out

Connect to any ADAT equipment with up to 8 digital inputs and 8 digital outputs.

5. USB hub

3-port USB hub to connect your master keyboard, USB stick, dongle, and more.

6. USB connection

Connect AudioFuse to your computer, tablet or phone. Most features are available even with only the USB power supplied by a computer.

7. Phono/line inputs 3&4

Connect external phono or line devices to these RCA+ground and balanced 1/4” inputs.

8. Speaker outputs A&B

Connect two pairs of speakers to these balanced 1/4” outputs for easy A/B monitor switching.

9. Input control sections 1&2

Direct access to each feature of analog inputs 1&2: input gain with VU-metering, true 48V, phase invert, -20dB pad and instrument mode.

10. Output control section

Direct access to each of the analog output features: output level with VU-metering, audio mix selection, mono mode, output dimming, mute, and speaker A/B selection.

11. Direct monitoring

Enjoy zero-latency monitoring of the recorded signals and blend them into your mix.

12. Phones control sections 1&2

Direct access to each of the features of headphone outputs 1&2: output level, mono mode and audio mix selection.

13. Talkback

Press a button to communicate with talent in another room via the built-in microphone.

14. Input channels 1&2

Connect microphones, instruments or line devices to the 2 XLR/balanced 1/4” combo inputs.

15. Phones output channels 1&2

Don’t bother looking for a 1/4” or 1/8” phones adapter; AudioFuse has both connectors for each phones output.

Sound Quality

Outside of the physical product features, Arturia are keen to show off their DiscretePro preamps, with a signal-to-noise ratio of <-129dB and a frequency response between 20Hz and 20kHz of +/-0.05dB, promising an extremely flat and clean signal path for your recordings.

Designed to achieve low distortion rates, with dedicated preamps for both the line and mic channels, they’ve clearly strived to make this a great unit for recording, and there is a bit on the testing and development process to be found in the video below.

Final Thoughts

It’s been a long time coming, but the AudioFUSE should finally be with us around June the 8th. The feature set promises a very capable and flexible product if it proves to be a strong performer. The biggest unknown here, however, is just how well it will perform, and as Arturia are a new entry to this arena, driver performance is going to be an unknown quantity until we see one on the bench.

There is a lot of competition at the £500 price point this unit is landing at, including a number of high-performance Thunderbolt and USB units. The included feature set certainly has enough of a punch to keep it relevant in today’s busy marketplace, and hopefully all that extra R&D time is going to pay off for the patient user in the end.

The Arturia AudioFUSE range @ Scan

sE Electronics Expand Their X1 Series With The New X1 S Studio Condenser Mic

The X1 is renowned for its sound quality and versatility at a budget price. There have been several variations released over the years, and just last year we saw a follow-up to the original X1, the X1 A. Now we have another revision, the X1 S, which boasts some new and improved features. This latest revamp comes housed in an all-metal body and utilizes a hand-made condenser capsule. It features two high-pass filters as well as a 3-position attenuation switch. An SPL rating of 160dB is also worth a mention… not bad for an all-purpose large-diaphragm condenser mic.

The X1 S is set to be priced at $249/€249 and is expected to be available in May.

The Vocal Pack and Studio Bundle have also been updated. The X1 S Vocal Pack and X1 S Studio Bundle are set to be priced at $299/€299 and $399/€399 respectively.

Head on over to their product page for more info.

SE Electronics Products @ Scan

To D Or Not To D…Behringer Targets Classic Synth Revivals

Last week the internet spontaneously combusted at some news from Uli Behringer that his company was thinking about developing a cheap ($400) Minimoog Model D clone. Ok, that may be a little bit of an exaggeration but the salt was definitely flowing over at the Gearslutz forum and it’s safe to say that Mr Behringer sure knows how to split people into two distinct camps. His following comment rubbed some people up the wrong way:

“Many people have asked us to revive synth jewels from the past and make them affordable so everyone can own one. This very much resonates with me because when I was a kid, I spent hours in stores playing and admiring those synths – however I couldn’t afford them which was tremendously frustrating.

Frankly, I never understood why someone would charge you US$ 4,000 for a MiniMoog, when the components just cost around US$ 200.”

I actually see a lot of merit in this, and hot on the heels of the success of the DeepMind 12 I see no reason why the company shouldn’t put the engineering talent from MIDAS to every good use they can possibly think of. I highly doubt, with that level of talent on board, that they will settle for simple clones; the Juno 106 clone that slowly transformed into the DM12 is a prime example of this train of thought.

Anyway, I digress. On one hand we have a group of people who think that this behaviour is total sacrilege. More so coming from Uli’s company who have the manufacturing facilities and buying power to make this cheap Model D a reality. Certain commenters have been quite vocal about this being a case of big B throwing a serious spanner in the works for the smaller boutique companies who simply cannot compete in terms of buying power and manufacturing cost – especially as Moog have reissued the Model D recently themselves.

On the other hand we have struggling musicians and those who don’t care what badge a piece of gear has, as long as it sounds good and they can afford it then its a win-win. After all the posturing and proverbial mud slinging, Uli posted a gorgeous little render of a Eurorack compatible Model D clone…and with it the internet fire seemed to be suffocated by the lack of oxygen due to the communal gasps.

Some of the staunchest naysayers on the thread seemed to relax a little once Uli revealed it wasn’t going to be just a cheap 1:1 clone. Among the ashes of the week’s bickering and moderated comments, there were posts saying Behringer may have plans for another 20 synths. Wait, what?!

Fast forward a week, and Uli has now teased that the OSCar and ARP 2600 are high on the priority list, and I am personally over the moon! He left this comment on Gearslutz last night:

“Aside from the Oscar synth, I can confirm that the 2600 is high on our priority list as it is a truly remarkable synth; I always wanted one for myself:-)

We are currently trying to acquire an original unit for benchmark purposes.

We hope we will be able to show you a first design draft within the next few weeks, while we’re studying the circuit diagrams to provide you with an estimated retail price.

Once ready we will reach out to you to see if there is enough interest.”

Considering I’m not rolling in dosh, I have never had the opportunity to play on any of these mythical beasts, and his projected price range is much more suitable for someone like myself. I personally don’t see these units taking any market share away from the likes of Moog as they are aimed at completely different price points and completely different customer bases that very rarely venture into each others gear territory.  Uli Behringer I salute you!

For more info be sure to head over to the Gearslutz forums, unfortunately some of the more colourful posts have been moderated but these threads are still a fantastic read.

https://www.gearslutz.com/board/electronic-music-instruments-electronic-music-production/1142144-behringer-mini-model-d-good-idea.html

https://www.gearslutz.com/board/electronic-music-instruments-electronic-music-production/1141074-what-synths-should-behringer-make-next.html

Uli’s post regarding the OSCar and ARP 2600:

https://www.gearslutz.com/board/electronic-music-instruments-electronic-music-production/1141074-what-synths-should-behringer-make-next-9.html#post12499056

If this little article has got your synth buds salivating, why not take a look at the oscillating goodness we have in store!

https://www.scan.co.uk/shop/pro-audio/instruments/synthesizers

AMD Ryzen First Look For Audio

Ryzen is finally with us, and it is quite possibly one of the most anticipated chipset launches in years, with initial reports and leaked benchmarks tending to show the whole platform in a very favourable light.

However, when it comes to pro audio we tend to have different concerns over performance requirements than those outlined and covered by more regular computer industry testing. So, having now had a chance to sit and work with an AMD 1700X for a week or so, we’ve been able to put this brand new tech through some more audio-centric benchmarking, and today we’ll take a first look and see if it’s right for the studio.

AMD has developed a whole new platform with the focus on improving low-level performance and raising the “IPC”, or instructions per clock cycle, figure. As ever they have been keen to keep it affordable, with certain choices having been made to keep it competitive, and to some extent these are the right choices for a lot of users.

Ryzen Chipset Features

The chipset gives us DDR4 memory but, unlike the X99 platform, restricts us to dual-channel RAM configurations and a maximum of 64GB across the 4 RAM slots, which may limit its appeal for heavyweight VSL users. There is a single M.2 connection option for a high-speed NVMe drive and 32 lanes for the PCIe connections, so the competing X99 solutions still offer us more scope here, although for the average audio system the restrictions above may present little to no real downside, at least from a configuration requirements point of view.

One thing missing from the specification that has an obvious impact in the studio is the lack of Thunderbolt support. Thunderbolt solutions require BIOS-level and physical board-level support in the shape of the data communication header found on Intel boards, and Thunderbolt itself is an Intel-developed standard with Apple backing. With neither of those companies appearing keen to licence it up front, we’re unlikely to see Thunderbolt at launch, although there is little to say that this couldn’t change in later generations if the right agreements can be worked out between the firms involved.

Early testing with the drivers available to us has so far proven quite robust, with stability being great for what is essentially a first-generation release of a new chipset platform. We have seen a few interface issues regarding older USB 2 interfaces and the USB 3 headers on the board, although the USB 3 headers we’ve seen are running the Microsoft USB 3 drivers, which admittedly have had a few issues on Intel boards with certain older USB 2-only interfaces, so this looks to be consistent between both platforms. Where we’ve seen issues on the Intel side, we’re also seeing issues on the AMD side, so we can’t level this as being an issue with the chipset, and it may prove to be something that the audio interface firms can fix with either a driver or firmware update.

Overclocking has been limited in our initial testing phase, mainly due to a lack of tools. Current Windows testing software is having a hard time with temperature monitoring during our test period, with none of the tools we had available being able to report the temperatures. This of course is something that will no doubt resolve itself as everyone updates their software over the next few weeks, but until then we tried to play it safe when pushing the clocks up on this initial batch.

We managed to boost our test 1700X up a few notches, to around the level of the 1800X, in the basic testing we carried out, but taking it further led to an unstable test bench. No doubt this will improve after launch as the initial silicon yields improve, and not having seen an 1800X as yet, that may still prove to be the cherry-picked option in the range when it comes to overclocking.

One of the interesting early reports that appeared right before launch was the CPUid benchmark result, which suggested this may shape up to be one of the best-performing multi-core consumer-grade chips. We set out to replicate that test here, and the result does indeed look very promising on the surface.

Ryzen 1700x CPU id results

We follow this up with a Geekbench 4 test, itself well trusted as a cross-platform CPU benchmark. The single-core performance reflects the results seen in the previous test, placing just behind the i7 7700K in the results chart. The multi-core score, whilst strong, sits behind the 6900K and, in this instance, below the 6800K but above the 7700K.

GeekBench 4 AMD 1700X

So, moving on to our more audio-centric benchmarks, our standard DAWBench test is first up. Designed to load-test the CPU itself, it has us stacking plugin instances in order to measure each chip against a set of baseline results. The AMD proves itself strongly in this test, placing midway between the cost-equivalent 6-core Intel 6800K and the far more expensive 8-core 6900K. With the 1700X offering 8 physical cores, plus threading on top to take us to 16 logical cores, this at first glance looks to be exactly where we would expect it to sit given the hardware on offer, and at a very keen price point.

Ryzen DPC Test
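For anyone unfamiliar with how this style of stacking test works in principle, the Python sketch below is a purely hypothetical illustration rather than the actual DAWBench project: it keeps adding identical plugin-style workloads until the time needed to process one buffer no longer fits inside the time that buffer represents. The sample rate, buffer size and `fake_plugin` maths are all assumptions made for the example.

```python
import time

SAMPLE_RATE = 44100                            # Hz, assumed for illustration
BUFFER_SIZE = 64                               # samples per ASIO buffer
BUFFER_BUDGET = BUFFER_SIZE / SAMPLE_RATE      # seconds available per buffer


def fake_plugin(buffer):
    """Stand-in for one compressor/EQ instance: a little per-sample maths."""
    return [(s * 0.5) + (s * s * 0.1) for s in buffer]


def fits_in_budget(instance_count, test_buffers=50):
    """Return True if `instance_count` serial plugin passes finish inside the budget."""
    buffer = [0.001 * i for i in range(BUFFER_SIZE)]
    start = time.perf_counter()
    for _ in range(test_buffers):
        data = buffer
        for _ in range(instance_count):
            data = fake_plugin(data)
    elapsed = (time.perf_counter() - start) / test_buffers
    return elapsed <= BUFFER_BUDGET


# Keep stacking instances until the deadline is missed, DAWBench-style.
count = 0
while fits_in_budget(count + 1):
    count += 1
print(f"Sustained {count} plugin instances at a {BUFFER_SIZE}-sample buffer")
```

A real DAW spreads these chains across multiple cores, of course, but the pass/fail condition is the same: the work has to clear before the next buffer arrives.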

I wanted to try a few more real-world comparisons here, so first up I've taken the DAWBench test and restricted it to 20 channels of plugins. I've then applied this test to each of the CPUs on test, with the results appearing under the “Reaper” heading on the chart below.

Sequencer AMD 1700X

The 1700X stands up well against the i7 7700K but doesn't quite manage to match the Intel chips in this instance. In a test like this, where we're not stressing the CPU itself or trying to overload the available bandwidth, the advantages of the low-level microarchitecture tend to come to the fore, and the two Intel chips based around the same platform perform roughly in line with each other, although this test doesn't take into account the extra bandwidth on offer from the 6900K.

Also on the same chart are two other test results. The first is the 8 Good Reasons demo from Cubase 8, which we ran across the available CPUs to gain a comparison in a more real-world project. In this instance the results come back fairly level across the two high-end Intel CPUs and the AMD 1700X. The 4-core mid-range i7 scores poorly here, but this is expected, with the obvious lack of physical cores hampering the project playback load.

We also ran the “These Arms” Sonar demo and replicated the test process again. This test's results are a bit more erratic, with a certain emphasis apparently placed on the single-core score as well as the overall multi-core score. This is the first time we see the 1700X falling behind the Intel results.

In other testing we've done along the way in other segments, we've seen some video rendering packages and even some games exhibiting CPU-based performance oddness that looked out of the ordinary. Obviously we have a concern here that there might be a weakness that needs to be addressed when it comes to overall audio system performance, so with this result in mind we decided to dig deeper.

To do so we've made use of the DAWBench Vi test, which builds upon the standard DAWBench test and allows us to stack multiple layers of Kontakt-based instruments on top of it. With this test, not only are we placing a heavy load on the CPU, but we're also stressing the sub-system and seeing how capable it is of quickly handling large, complex data loads.
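As a rough, hypothetical picture of what "stressing the sub-system" means here (the voice count, library size and maths below are invented for the example and bear no relation to the actual DAWBench Vi project), each added instrument layer forces the machine to fetch fresh sample data from memory inside the same buffer deadline, so memory and storage throughput start to matter as much as raw CPU grunt:

```python
import time
import numpy as np

SAMPLE_RATE = 44100
BUFFER_SIZE = 64
BUDGET_MS = BUFFER_SIZE / SAMPLE_RATE * 1000   # roughly 1.45 ms per buffer

# A block of random audio standing in for sample-library content held in RAM.
N_VOICES = 32                                  # illustrative figure only
VOICE_LEN = 1_000_000                          # samples per voice (~128 MB total)
LIBRARY = np.random.rand(N_VOICES, VOICE_LEN).astype(np.float32)


def render_buffer(position):
    """Mix one buffer: every voice fetches its next chunk of samples, then gets summed."""
    mix = np.zeros(BUFFER_SIZE, dtype=np.float32)
    for v in range(N_VOICES):
        chunk = LIBRARY[v, position:position + BUFFER_SIZE]   # memory / sub-system fetch
        mix += chunk * 0.01                                   # trivial DSP on top
    return mix


buffers = 2000
start = time.perf_counter()
for i in range(buffers):
    render_buffer((i * BUFFER_SIZE) % (VOICE_LEN - BUFFER_SIZE))
per_buffer_ms = (time.perf_counter() - start) / buffers * 1000
print(f"{per_buffer_ms:.3f} ms per buffer against a {BUDGET_MS:.2f} ms budget")
```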

DAWBench Vi

This gave us the results found in the chart above, and it starts to shed some light on the concerns we have.

In this instance the AMD 1700X underperforms all of the Intel chips at the lower buffer settings. It does scale up steadily, however, so this looks to be an issue with how quickly it can process the contents of each buffer.

So what’s going on here? 

Well, the other relevant information to flesh out the chart above is just how much CPU load was in use when the audio started to break up in playback.

AMD 1700X @ 3.8 GHz

64 = 520 count @ 70% load
128 = 860 count @ 72% load
192 = 1290 count @ 85% load

Intel 6800K @ 3.8 GHz

64 = 780 count @ 87% load
128 = 1160 count @ 91% load
192 = 1590 count @ 97% load

Intel 6900K @ 3.6 GHz

64 = 980 count @ 85% load
128 = 1550 count @ 90% load
192 = 1880 count @ 97% load

Intel 7700K @ 4.5 GHz

64 = 560 count @ 90% load
128 = 950 count @ 98% load
192 = 1270 count @ 99% load

So the big problem here appears to be inefficiency at the lower buffer settings. The ASIO buffer hands data to the CPU in quicker bursts the lower you go with the setting, so with the audio crackling and breaking up, it seems the CPU just isn't clearing the buffer quickly enough once it gets to around 70% CPU load at those lower 64 and 128 buffer settings.
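To put rough numbers on that deadline (assuming a 44.1kHz sample rate, which the charts above don't state), the time available to process each buffer works out as follows:

```python
SAMPLE_RATE = 44100                 # Hz, assumed; the test rate isn't stated above

for buffer_size in (64, 128, 192):
    budget_ms = buffer_size / SAMPLE_RATE * 1000
    print(f"{buffer_size:>3} samples -> {budget_ms:.2f} ms to clear each buffer")

# 64 samples  -> 1.45 ms
# 128 samples -> 2.90 ms
# 192 samples -> 4.35 ms
```

Halving the buffer halves the window, so any extra latency in moving data to and from the cores eats a proportionally bigger slice of the budget at 64 samples than it does at 192.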

Intel at these buffer settings looks to be hitting 85% or higher before breaking up, so whilst the AMD chip may have more raw performance to hand, the responsiveness of the rest of the architecture appears to be letting it down. It's no big secret, looking over the early reviews, that whilst AMD has made some amazing gains in IPC this generation, they still appear to be lagging slightly behind Intel on this metric.

So the results start to outline this as one of the key weaknesses of the Ryzen configuration, with it becoming quite apparent that there are bottlenecks elsewhere in the architecture coming into play beyond the new CPUs themselves. At the lower buffer settings the test tends to favour single-core performance, with the Intel chips taking a solid lead. As you slacken off the buffer, more cores become the better option as the system is able to spread the load, but even then it isn't until we hit a 192 buffer setting on the ASIO drivers that the performance catches up with the Intel 4-core CPU.

This appears to be one area where AMD performance still seems to be lacking compared with the Intel family, be that due to hardware bottlenecks or to not quite having caught up in overall IPC handling at the chipset level.

What we also see is the performance start to catch up with Intel again as the buffer is relaxed, so it's clear that a certain amount of performance is still there to be had; the system just can't access it quickly enough when placed under heavy, complex loads.

What we can safely say, having taken this look at the Ryzen platform, is that across the tests we've carried out so far the AMD platform has made some serious gains with this generation. Indeed, there is no denying that there are going to be more than a few scenarios where the AMD hardware is able to compete with, and beat, the competition.

However, given the bottlenecks we've seen concerning the load balancing of complex audio chains, there is a lot of concern here that it simply won't offer the required bang per buck for a dedicated studio PC. As the silicon continues to be refined and the chipset and drivers are fine-tuned, we should see the whole platform continue to go from strength to strength, but at this stage, until more is known about the strengths and weaknesses of the hardware, you should be aware that it has both pros and cons to consider.

The Full Scan 3XS Pro Audio Workstation Range

Back To Black – Antelope Bring Out A New Addition To The ORION Family, The ORION32 HD

Bulgaria seems to be a bit of a busy place for Pro Audio at the moment with Antelope pushing out some absolutely fantastic gear and progressing on the software side of things at a staggering pace. The Balkan Mountains truly are alive with the sound of music…courtesy of Antelope’s pristine AD/DA conversion and supreme clocking!

Truth be told, it's a pain (in a good way) keeping the content updated for their products; the guys and girls over at Antelope are like a machine, churning out updates to their FPGA-based ecosystem of interfaces, giving their customers extra functionality and making additions to the free FPGA FX packages.

Enter the new addition to the family – the stunning-looking ORION32 HD was announced at NAMM, and I imagine Pro Tools HD users are pretty happy. It's compatible with any DAW on the market via an HDX port or USB 3.0. This means no matter what you choose to use in the studio, you can benefit from 64-channel 192kHz audio I/O, Zero Latency Monitoring, industry-leading AFC clocking and, as you'd expect, flawless conversion. Throw in the usual plethora of connectivity such as ADAT, MADI, S/PDIF, Wordclock and mastering-grade monitor outs and you have a beast of an interface. Now if you have a Pro Tools HD setup and a customer comes in with a laptop project made on Cubase, Logic or Presonus etc., you can simply use the USB port to integrate their project directly into your existing setup – no extra interfaces needed!

Coming back to the free FPGA FX package, if you're not aware of it, Antelope have integrated a fine selection of free hardware-based vintage FX. The ever-growing library includes hardware-based vintage EQs, compressors and Auraverb. The exquisite collection of vintage EQs includes authentic models of the Lang PEQ-1, BAE 1073, 1084, 1023, UK-69 and many other classic British and German vintage units. If vintage compressors are more to your taste, how about a life-like model of the FET76, aka the legendary UREI 1176LN?

One more thing to note is the impressive software control. The routing matrix is pretty neat, and even that has had some little tweaks over the last few software revisions to make it even easier to use. The control panel can be run on any machine on the network, allowing remote access to all the important stuff, and as a side bonus it can be resized at will – it's the little things that can make a huge difference in the right circumstances!

The ORION32 HD is available for pre-order at Scan Pro Audio, go take a look!

Roland Announce Rubix Line Of Audio Interfaces

Roland have announced their new line of portable USB audio interfaces for Mac, PC and iPad. The Rubix line consists of 3 units – the Rubix 22, 24 and 44 – all of which are designed with transparent, low-noise pre-amps and support for audio resolutions up to 24-bit/192kHz.

The Rubix 22 has 2 ins/2 outs, the 24 has 2 ins/4 outs and the 44, as you can probably guess, has 4 ins/4 outs. All 3 interfaces also support MIDI I/O and feature combo jack inputs, Hi-Z inputs and headphone outputs. The 24 and 44 also feature a hardware compressor/limiter! A ground lift switch on the back should also help laptop users with ground loop issues. There's a switchable power source on the 22 and 24 which allows you to power the interfaces with a USB battery when connecting them to an iPad; the 44, however, requires an AC adapter. The activity LEDs are visible from the front and the top of the units, making monitoring signal input from any angle very convenient.

Key Features

  • 2-in/2-out / 2-in/4-out / 4-in/4-out USB audio interface.
  • 2 / 4 low-noise mic preamps with XLR combo jacks.
  • Hardware compressor/limiter (Rubix 24 & Rubix 44).
  • Hi-Z input for guitar and other high impedance sources.
  • MIDI In/Out ports.
  • Extensively shielded, low-noise design.
  • Sturdy and compact metal construction.
  • Easy-to-read indicators show vital information.
  • Low latency, class compliant drivers.
  • Ground lifts for quiet operation in a variety of venues.
  • Includes Ableton Live Lite.

Info on pricing & availability yet to be confirmed.

Roland Hardware @ Scan

Take Kontrol with Native Instruments winter deals

Live now and running through to early in the new year, we have a selection of superb deals from Native Instruments, giving you even more reason to keep warm in the studio over the coming months.

Traktor Kontrol S8

The flagship all-in-one controller hits our lowest price point yet, retailing at just £799.

It is capable of running as a 4-channel stand-alone mixer and professional audio interface, and features enhanced Stems-ready decks for ultimate control over the included Traktor Scratch Pro, now at the latest 2.11 version.

Touch-sensitive controls and high-res displays deliver a streamlined DJ workflow, and with connections for anything your setup needs, the Traktor Kontrol S8 boasts the most expansive connectivity on a DJ controller yet.

Traktor Kontrol S8 @ Scan

Komplete Kontrol S-Series

The full range of S-Series keyboards has taken a price tumble, with the S49, S61 & S88 all receiving around £80 off our previous price. The keyboards are all built around superb Fatar keybeds and offer NKS functionality, which provides more in-depth mapping with Native plugins and selected third-party instruments. Of course, regular VST support is included, although anyone who already owns Komplete will benefit most from some of the extra functionality on offer here. In fact, the keyboard itself ships with Komplete Select, and in support of the keyboard price crash we are offering a number of Komplete upgrade deals from Select that could save you a packet over the regular stand-alone prices. For example, if you pick up a Komplete Kontrol S49 and our Komplete 11 upgrade package, this would save you almost £300 off our regular pricing!

Check out the full range of Komplete Kontrol Offers here.

Komplete Audio 6

It's always a popular offer when we run it, and the Komplete Audio 6 has always been well regarded here at Scan. Tank-like build quality combined with some of the best-performing drivers we've seen on any interface under £500 makes this a very popular unit. Offering a 4-in/4-out I/O selection, MIDI control, a pair of pre-amps and a very handy physical volume control, the Komplete Audio 6 is a great interface for anyone getting into making music, or even those making their first upgrade.

The Komplete Audio KA6 for just £149 @ Scan

See the full Native Instruments range @ Scan