Gigabyte G1 Gaming GTX 1070 review: a stellar performer for its price


The GTX 1070 isn’t at the top of Nvidia’s product stack. That said, it’s probably the most interesting card available for gamers who want high-end performance, but can’t afford to drop ridiculous amounts of money on an ultra-high-end GPU. Gigabyte was kind enough to send over its G1 Gaming GTX 1070, and we’ve put the card through its paces.
We’ve already talked about the GTX 1070’s architecture, but since this is our first Pascal review, let’s take a moment to recap the core configuration. The GTX 1070 has 1,920 CUDA cores, 120 texture mapping units, and 64 ROPs (this is often written as a 1920:120:64 configuration). It uses 8GB of conventional GDDR5 clocked at 8Gbps for 256GB/s of memory bandwidth. It doesn’t pack quite the oomph of the GTX 1080 — it’s functionally limited to setting up 48 pixels per clock, for one thing, even though it technically has all 64 ROPs.
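If you want to sanity-check that bandwidth figure, it falls directly out of the card’s 256-bit memory interface (per Nvidia’s published specs) and the 8Gbps effective data rate of its GDDR5. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope check on the quoted memory bandwidth figure.
bus_width_bits = 256   # GTX 1070 memory interface width (Nvidia's published spec)
data_rate_gbps = 8     # effective per-pin GDDR5 data rate

total_gbits_per_s = bus_width_bits * data_rate_gbps  # bits moved per second
bandwidth_gb_per_s = total_gbits_per_s / 8           # convert bits to bytes
print(f"{bandwidth_gb_per_s:.0f} GB/s")              # prints "256 GB/s"
```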
General benchmarks have already shown the GTX 1070 to be a tough customer, capable of besting Nvidia’s previous GTX Titan X GPU. So what does Gigabyte bring to the table with its G1 Gaming GPU? As it turns out, a fair bit.
[Image: Gigabyte G1 Gaming GTX 1070]
Gigabyte’s G1 doesn’t use Nvidia’s reference-style blower. Instead, it offers three separate fans in an open-air cooler. Whether you prefer a blower or an open-air cooler is partly a matter of taste and partly a matter of airflow. Because a blower exhausts hot air directly out of the case through the second PCI Express slot bracket, blowers are typically considered a better solution for low-airflow cases. If your case doesn’t run hot, there shouldn’t be much of a performance difference between the two.
Gigabyte has taken a page from other PC manufacturers like Razer and begun including its own LED lighting system on its GPUs. This is the first time we’ve seen the company’s customizable lighting solution in action, and it’s easy to control or modify thanks to the included Xtreme utility (more on that in a bit).
[Image: Gigabyte G1 Gaming GTX 1070]
Power is provided via a single 8-pin connector
The G1 Gaming 1070 uses 6+2 power phases rather than the 4+1 arrangement Nvidia’s Founders Edition GPUs field, and the company claims its composite heat pipe technology improves cooling capacity by up to 29% compared with other heat pipe designs. The heat pipes themselves are made of pure copper and run directly over the GPU, which should improve the heat sink’s performance under load.

GPU Boost 3.0 and overclocking

When Nvidia designed Pascal, it made some significant changes to its GPU Boost technology — changes that matter in the context of this review. Up until now, GPU Boost has been similar to its CPU counterpart: AMD and Nvidia ship a card at a guaranteed base clock with a “best effort” boost clock that the GPU will try to hold if thermals and power consumption permit.
GPU Boost 3.0 is only present on the GTX 1080 and GTX 1070, but it has a significant impact on the maximum potential clock speed of the GPU. For example, our Gigabyte G1 Gaming GTX 1070 has a base clock of 1620MHz and a boost clock of 1822MHz when running in “OC Mode,” and a 1594MHz base clock and 1784MHz boost clock in “Game Mode.”
[Image: GPU Boost 3.0]
Neither of those numbers tells the whole story of how fast the G1 Gaming can run, however. When I started testing the GPU’s overclocking performance, I was surprised to find that the card overclocked quite poorly, with barely 100MHz of headroom. When I dug into exact clock speeds using GPU-Z, I found the card was already sustaining a 1911MHz clock rate across multiple benchmark loops. It would spike to 1949MHz, drop to 1923MHz, and then level off, in multiple titles, at 1911MHz.
Nvidia describes GPU Boost 3.0 as a feature that “boosts the card’s clock speed in real-time based on the target temperature. If the card is running below the set target temperature, GPU Boost 3.0 will increase the clock speed to improve performance. The target temperature can be reset depending on your preference so you can have the card run more quietly for everyday tasks and older games, and run at full tilt during intense high-resolution gaming sequences.”
Not only does this feature work, it works brilliantly — but its impact on the overclocking results you may see has been poorly explained. The effective, sustained boost clock of our GTX 1070 wasn’t 1822MHz — it was 1911MHz. When our GPU started throwing artifacts at the +100MHz overclocking mark, it wasn’t running at 1922MHz; it was running at over 2GHz.
This has two practical implications for enthusiasts. First, if your GPU doesn’t seem to overclock well, use GPU-Z to check its actual top-end frequency. If your card is already running at 1900MHz or higher, you may not see much additional headroom because there isn’t much left to take advantage of. Second, improving your system cooling or GPU cooler may yield better results than simply cranking up clock speed sliders. The 1070 and 1080 both take temperature and TDP into consideration when calculating the maximum available frequency, so a GPU that is redlining won’t see much benefit regardless of the settings you dial into any overclocking application.
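If you’d rather watch this from the command line than with GPU-Z, Nvidia’s nvidia-smi utility (which ships with the driver package) reports the same clock, temperature, and power data. Here’s a minimal polling sketch; it assumes nvidia-smi is on your PATH and simply logs one sample per second while your benchmark loops:

```python
# Minimal sketch: poll the GPU's current graphics clock, temperature, and power
# draw once per second while a benchmark loops. Assumes Nvidia's nvidia-smi
# utility (installed with the driver) is on the PATH; GPU-Z shows the same data.
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=clocks.gr,temperature.gpu,power.draw",
         "--format=csv,noheader"]

for _ in range(30):                                   # sample for ~30 seconds
    sample = subprocess.check_output(QUERY, text=True).strip()
    print(sample)                                     # e.g. "1911 MHz, 68, 145.20 W"
    time.sleep(1)
```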

Gigabyte’s Xtreme Software Tuner: Surprisingly functional

For years, OEMs have included various bits of customized software with their video cards and motherboards — and for years, these bundles haven’t been very good. At best, such utilities tend to offer a themed skin for an otherwise-free bit of software.
[Image: Xtreme Engine LED color and pattern controls]
Gigabyte’s Xtreme Engine is a remarkable exception to this trend. The software offers the option to tweak the GPU’s LED color palette and lighting pattern, as well as the ability to overclock the card directly or define a custom fan profile. The color palette and customization options are shown above.
[Image: Xtreme Engine clock controls]
Clock controls are laid out in a way that allows the end user to define either a percentile increase or to make a flat adjustment, MHz by MHz. None of these options is revolutionary, but they’re presented well.
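To illustrate how the two adjustment styles map onto each other, here’s a trivial sketch; the numbers are hypothetical examples, not tuning recommendations:

```python
# Illustrative only: a percentage offset and a flat MHz offset express the same
# clock bump. Values are hypothetical, not recommended settings.
base_clock_mhz = 1594          # G1 Gaming "Gaming Mode" base clock

percent_offset = 5             # a 5% increase...
flat_offset_mhz = base_clock_mhz * percent_offset / 100
print(f"+{percent_offset}% == +{flat_offset_mhz:.0f}MHz "
      f"({base_clock_mhz + flat_offset_mhz:.0f}MHz base)")  # +80MHz, 1674MHz base
```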
[Image: Xtreme Engine Easy Settings menu]
Users who don’t want to delve too deeply into the more arcane settings can make simple changes from the “Easy Settings” menu. OC Mode slightly increases the GPU’s clocks while Gaming Mode sets stock clocks to Nvidia’s reference design. Eco Mode sets slightly lower clocks and reduces power consumption. Clock offsets can also be controlled from here.
I don’t know how many people actually buy GPUs on the strength of their included software, but Gigabyte’s Xtreme Tuning Engine is genuinely useful. The GeForce GTX 1070 the company sent us shipped in Gaming Mode by default, but switching over to the slightly faster OC Mode is easy.

Test Setup

The Gigabyte G1 Gaming GTX 1070 is largely in a class of its own, at least as far as AMD is concerned. Still, we need to put some kind of figures from Team Red on the board, and after watching prices for several weeks, we’ve seen multiple Fury X cards selling for around the $400 mark. That’s roughly the same price band the GTX 1070 is selling into, so it’s our point of comparison for this review.
All GPUs were tested with an eight-core Haswell-E Core i7-5960X CPU and an Asus X99-Deluxe motherboard. Nvidia’s 368.81 and AMD’s 16.7.3 drivers were used for the appropriate cards. All of our power consumption figures were run using an Antec 750W 80 Plus Gold PSU, not the larger but less efficient Thermaltake 1200W 80 Plus we used for a number of older reviews.
All GPUs were tested in both 1080p and 4K, as these represent two of the most common resolutions. Let’s see what Gigabyte’s GTX 1070 G1 Gaming can do, shall we?

BioShock Infinite

BioShock Infinite is a DirectX 11 title from 2013. We tested the game in 1080p and 4K with maximum detail and the alternate depth-of-field method. While BioShock Infinite isn’t a particularly difficult lift for any mainstream graphics card, it’s a solid last-gen title based on the popular Unreal Engine 3.
[Chart: BioShock Infinite benchmark results]
The previous-generation cards put up a solid fight in BioShock Infinite: the GTX 980 Ti narrowly wins at 1080p and loses by just 4% at 4K, while AMD’s Radeon R9 Fury X also posts competitive numbers.

Company of Heroes 2

Company of Heroes 2 is an RTS game that’s known for putting a hefty load on GPUs, particularly at the highest detail settings. Unlike most of the other games we tested, COH 2 doesn’t support multiple GPUs. We tested the game with all settings set to “High,” with V-Sync disabled.
[Chart: Company of Heroes 2 benchmark results]
Here, we see a larger difference between the GTX 980 Ti and the GTX 1070, with Nvidia’s newer GPU pulling ahead by 13% at 4K. The gap is narrower against the Radeon R9 Fury X, but AMD’s GCN architecture has always fared better at 4K than at 1080p relative to Team Green. Overall the GTX 1070 leads the pack, though not by a huge margin.

Metro Last Light Redux

Metro Last Light Redux is the remastered version of Metro Last Light with an updated texture model and additional lighting details. Metro Last Light Redux’s benchmark puts a fairly heavy load on the system and should be seen as a worst-case run for overall game performance. We test the game at Very High detail with SSAA enabled.
[Chart: Metro Last Light Redux benchmark results]
At 1080p, the Gigabyte GTX 1070 is a full 1.46x faster than the GTX 980 and 11% faster than both the R9 Fury X and the GTX 980 Ti. None of our GPUs can handle 4K with SSAA, which isn’t surprising, considering that workload is closer to 8K than 4K. The GTX 1070 is still about 6% faster than the GTX 980 Ti and a consistent 1.46x faster than the GTX 980.

Total War: Rome II

Total War: Rome II is the sequel to the original Rome: Total War. It’s fairly demanding on modern cards, particularly at the highest detail levels. We tested at maximum detail in 1080p and 4K with SSAO and Vignette enabled.
[Chart: Total War: Rome II benchmark results]
Total War: Rome II is only a modest win for the Gigabyte G1 at 1080p, but the GPU extends its lead to 12.5% at 4K. That’s not enough of a lead to justify upgrading from a card as expensive as the GTX 980 Ti, but if you dropped $450 on a GTX 980 a few years back, the 1070 offers a 41% frame rate improvement at 4K.

Shadow of Mordor

Shadow of Mordor is a third-person open-world game set between the events of The Hobbit and The Lord of the Rings. Think of it as Grand Theft Ringwraith, and you’re on the right track. We tested at maximum detail in 1080p and 4K with FXAA enabled (the only AA option available).
[Chart: Shadow of Mordor benchmark results]
In Shadow of Mordor, the GTX 1070 runs the table at both 1080p and 4K. Even AMD’s Fury X, which pulls ahead of the GTX 980 Ti by almost 9% at the higher resolution, can’t catch Nvidia’s 16nm Pascal GPU here.

Dragon Age: Inquisition

Dragon Age: Inquisition is one of the greatest role-playing games of all time, with a gorgeous Frostbite 3-based engine. While it supports Mantle, we’ve stuck with Direct3D in this title, as the D3D implementation has proven superior in previous testing.
While DAI does include an in-game benchmark, we’ve used a manual test run instead. The in-game test often runs more quickly than the actual title, and is a relatively simple test compared with how the game handles combat. Our test session focuses on the final evacuation of the town of Haven, and the multiple encounters that the Inquisitor faces as the party struggles to reach the chantry doors. We tested the game at maximum detail with 4x MSAA.
[Chart: Dragon Age: Inquisition benchmark results]
DAI’s MSAA gives even high-end GPUs fits at 4K, and the Gigabyte G1 1070 is no exception. While it’s slightly faster than the GTX 980 Ti, it’s not fast enough to render a playable frame rate. At 1080p, none of our test GPUs has a problem with the game at these detail settings, and the 1070’s 10% gain over the GTX 980 Ti is noticeable, but not extraordinary. The GTX 980, in contrast, is left far behind at both resolutions, even if it handles the 1080p settings capably enough.

Rise of the Tomb Raider

Rise of the Tomb Raider, the sequel to 2013’s rebooted Tomb Raider, has recently been updated with improved DirectX 12 support. We tested the game at the Very High detail preset with FXAA enabled at 1080p and 4K.
[Chart: Rise of the Tomb Raider benchmark results]
One of the persistent differences between Maxwell and AMD’s GCN has been the way the two architectures handle 1080p vs. 4K. AMD GPUs tend to take a much smaller hit when stepping up to 4K, and that’s definitely on display here — at 1080p, the Fury X is barely faster than the GTX 980 and far behind the GTX 980 Ti and GTX 1070. Step up to 4K, and the Fury X is 1.18x faster than the GTX 980 and essentially tied with the GTX 980 Ti. Both cards are surpassed, however, by the Gigabyte G1 Gaming, which outperforms the GTX 980 Ti by a further 1.10x.

Ashes of the Singularity

Ashes of the Singularity is one of the first mainstream DirectX 12 titles. It’s an RTS designed to take full advantage of DX12 features like asynchronous compute, and we’ve covered it since it launched in Early Access almost a year ago. We benchmarked the game at 1080p and 4K with the Extreme detail preset.
[Chart: Ashes of the Singularity benchmark results]
Ashes of the Singularity is a rare win for AMD’s older Fury X, but it also highlights the strength of the Pascal family compared with Nvidia’s Maxwell GPUs. The GTX 1070 ties the Fury X at 1080p but slips narrowly behind at 4K. It’s interesting to see the same 1080p/4K distinction between Pascal and GCN — like Maxwell, Pascal tends to lose slightly more top-end performance than AMD does when we step up the resolution.
While it doesn’t take the lead in this title, the Gigabyte GTX 1070 is still 16-22% faster than the top-end Maxwell card we tested.

Doom

The latest first-person shooter from id is a fabulous update to the original game and absolutely worth playing if you want an updated take on one of the most beloved titles of the 1990s. We tested Doom at a single resolution (1080p), but benchmarked it in both OpenGL and Vulkan to measure how AMD and Nvidia would perform in the same title under two different APIs.
Our test sequence is a six-minute battle through The Foundry, starting when the doors open to the level and finishing after the destruction of the third gore nest. There are a number of fire and smoke effects in this level, as well as plenty of demon-slaughtering and dodging around some reasonably open areas.
[Chart: Doom OpenGL vs. Vulkan benchmark results]
In OpenGL, the Radeon Fury X is one of the slowest cards, while in Vulkan it’s the fastest by no small margin. This is partly due to the general condition of AMD’s OpenGL consumer driver, which isn’t as robust as Nvidia’s, but it also reflects the improved GPU utilization AMD gets when operating under a lower-overhead API with support for asynchronous compute.
The performance gains for Nvidia are fairly small — the 1070 is again slightly faster than the GTX 980 Ti, but all of the cards turn in excellent results.

Power consumption

Our power consumption data is gathered by looping Metro Last Light Redux three times and measuring system power consumption at the wall during the third loop. We’ve been retesting our GPUs with a 750W Antec 80 Plus Gold power supply rather than the 1200W Thermaltake 80 Plus we used previously, so our figures have changed somewhat since older reviews were published.
First, we’ll look at total power consumption as measured at the wall.
[Chart: Total system power consumption]
The GTX 1070 doesn’t draw the least amount of power in our tests — the GTX 980 has that honor — but it’s only slightly above the GTX 980, while significantly below both the GTX 980 Ti and the Fury X. What happens when we factor power efficiency into the picture?
[Chart: Watts per frame]
We’ve included the R9 Nano in this graph as it’s AMD’s most power-efficient GPU from the 28nm generation. While the R9 Nano is capable of matching the GTX 980 and 980 Ti, none of the 28nm GPUs holds a candle to the GTX 1070, which offers substantially better efficiency than any other card on the market. We’ll have to wait for Vega, AMD’s next-generation architecture due late this year, to see if Team Red can counter Team Green in terms of absolute power efficiency.
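For clarity, the watts-per-frame metric is nothing more exotic than measured wall power divided by the average frame rate from the same run. A trivial sketch (the inputs below are placeholders, not our measured results):

```python
# Watts-per-frame is simply wall power divided by average frame rate.
# The numbers passed in below are placeholders, not measured results.
def watts_per_frame(system_power_watts: float, avg_fps: float) -> float:
    """Return energy efficiency as watts consumed per frame rendered."""
    return system_power_watts / avg_fps

print(round(watts_per_frame(300.0, 60.0), 2))  # 5.0 watts per frame
```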

Conclusion

The best conclusions pretty much write themselves — and this is one of them. With a top boost frequency of 1911MHz on our testbed, the Gigabyte GTX 1070 G1 Gaming has some serious legs. It’s quiet under load, maintains excellent temperatures, and it’s the most power-efficient GPU we’ve ever tested. Overclocking headroom wasn’t very high, but that’s measured against a sustained 1911MHz, which is already excellent.
If you upgraded to a GTX 980 Ti or Titan X last year, the 1070 isn’t going to be fast enough to interest you, but gamers with lower-end cards, up to and including the GTX 980, can see significant frame rate improvements by upgrading to this card.
[Image: Gigabyte G1 Gaming GTX 1070]
One thing the GTX 1070 demonstrates is that we’re hitting the end of 1080p as a test resolution for upper-end cards in many cases. While it’s still the most common resolution, and therefore important for that reason alone, it’s simply not demanding enough to really show differences between various cards. We’re not ready to pull the curtain on FHD just yet, but the day is coming when high-end GPUs from any company will pack more firepower than the resolution can realistically showcase.
There are just two caveats that we’d like to note about the G1 Gaming GTX 1070. First, our GPU’s boost frequency was very good — so good, in fact, that it may be at the upper end of the curve as far as maximum clock speeds are concerned. Boost 3.0 frequencies aren’t guaranteed by Nvidia, so your mileage will vary depending on the characteristics of your case and cooling.
Second, the G1 Gaming is still selling for $429, a $40 premium over where the GTX 1070 is supposed to sell. It’s possible that these cards will come down in price as availability improves, though that might not happen — with no strong competition from AMD at the moment, it’s entirely possible that GeForce prices will remain elevated until AMD launches Vega.
Speaking of AMD, we’ve got good news and bad news on that front. If you bought a Fury, Fury X, or R9 Nano last year, the GTX 1070’s performance improvement over those cards isn’t large enough to send most people running to spend more money — but the 1070’s overall strengths put it in a much better position through the end of this product cycle. The Fury X’s relatively limited memory pool is a consequence of its use of HBM, and the RX 480 now offers twice the memory at nearly half the price. With 14nm and 16nm GPUs now available, it’s difficult to recommend 28nm cards as competition.
Unless you’re determined to save $40 or to see what AMD brings to market, we’d go ahead and pull the trigger on this one. You’re unlikely to regret it.

