GPUBoss Review Our evaluation of R9 380 vs R9 280X among Desktop GPUs

Gaming

Battlefield 3, Battlefield 4, Bioshock Infinite and 21 more

Benchmarks

T-Rex, Manhattan, Cloud Gate Factor, Sky Diver Factor and 1 more

Compute

Face Detection, Ocean Surface Simulation and 3 more

Performance per Watt

Battlefield 3, Battlefield 4, Bioshock Infinite and 32 more

Value

Battlefield 3, Battlefield 4, Bioshock Infinite and 32 more

Noise and Power

TDP, Idle Power Consumption, Load Power Consumption and 2 more

Overall Score

AMD Radeon R9 380 

GPUBoss recommends the AMD Radeon R9 380 based on its benchmarks and noise and power.


Differences What are the advantages of each

Front view of Radeon R9 380

Reasons to consider the
AMD Radeon R9 380

Much better 3DMark 11 graphics score: 31,844 vs 10,792 (around 3x better)
Much better 3DMark Vantage graphics score: 31,181 vs 11,255 (more than 2.8x better)
More memory: 4,096 MB vs 3,072 MB (around 33% more)
Slightly higher clock speed: 918 MHz vs 850 MHz (around 8% higher)
Much higher Crysis 3 framerate: 39.4 fps vs 13.1 fps (around 3x higher)
Higher Battlefield 4 framerate: 59.5 fps vs 39.9 fps (around 50% higher)
Higher BioShock Infinite framerate: 80.8 fps vs 73.9 fps (around 10% higher)
Lower TDP: 190 W vs 250 W (around 25% lower)
Front view of Radeon R9 280X

Reasons to consider the
Generic Radeon R9 280X

Much better 3DMark06 score: 28,452 vs 12,191 (more than 2.3x better)
Much higher memory bandwidth: 288 GB/s vs 176 GB/s (around 64% higher)
Better floating-point performance: 4,096 GFLOPS vs 3,290 GFLOPS (around 25% better)
Higher effective memory clock speed: 6,000 MHz vs 5,500 MHz (around 9% higher)
Higher texture rate: 128 GTexel/s vs 102.8 GTexel/s (around 25% higher)
Wider memory bus: 384-bit vs 256-bit (50% wider)
More shading units: 2,048 vs 1,792 (256 more)
More texture mapping units: 128 vs 112 (16 more)
Better Sky Diver Factor score: 380.99 vs 353.3 (around 8% better)
Better PassMark DirectCompute score: 3,677 vs 2,938 (more than 25% better)
Better Bitcoin mining score: 468.39 mHash/s vs 410.27 mHash/s (around 14% better)
Slightly more compute units: 32 vs 28 (4 more)
Higher memory clock speed: 1,500 MHz vs 1,375 MHz (around 9% higher)
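Several of the figures in these lists follow directly from the other listed specs: GCN GPUs execute 2 FP32 operations per shading unit per clock, and peak memory bandwidth is the bus width (in bytes) times the effective memory clock. A quick sketch of the arithmetic; note the 280X core clock of 1,000 MHz used here is inferred from its listed 4,096 GFLOPS figure, not stated above:

```python
# Cross-check the listed specs: GCN GPUs do 2 FP32 FLOPs per shader per clock,
# and memory bandwidth is the bus width (in bytes) times the effective clock.

def gflops(shading_units: int, core_clock_mhz: float) -> float:
    """Peak FP32 throughput in GFLOPS (2 FLOPs per shader per cycle on GCN)."""
    return shading_units * 2 * core_clock_mhz / 1000

def bandwidth_gb_s(effective_clock_mhz: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: transfers per second times bytes per transfer."""
    return effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

print(round(gflops(1792, 918)))   # R9 380:  3290 GFLOPS, matching the list
print(round(gflops(2048, 1000)))  # R9 280X: 4096 GFLOPS (1,000 MHz clock assumed)
print(bandwidth_gb_s(5500, 256))  # R9 380:  176.0 GB/s
print(bandwidth_gb_s(6000, 384))  # R9 280X: 288.0 GB/s
```

Running the numbers this way also makes clear why the 280X keeps its bandwidth lead despite the 380's newer branding: the 384-bit bus outweighs the modest clock differences.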

Benchmarks Real world tests of Radeon R9 380 vs 280X

Bitcoin mining Data courtesy CompuBench

Radeon R9 380
410.27 mHash/s
Radeon R9 280X
468.39 mHash/s

Face detection Data courtesy CompuBench

Radeon R9 380
84.59 mPixels/s
Radeon R9 280X
95.62 mPixels/s

T-Rex (GFXBench 3.0) Data courtesy GFXBench

Manhattan (GFXBench 3.0) Data courtesy GFXBench

Fire Strike Factor Data courtesy FutureMark

Sky Diver Factor Data courtesy FutureMark


Battlefield 4


Showing 25 comments.
Good to know! What I actually meant is that it can't make the VRAM 8 GB in CrossFire mode. And I'm sure that when both cards are maxed at 100%, the 4 GB of VRAM can be utilized.
That's not right; it just copies the same data into both cards' memory so they can work at the same time.
It'll be bottlenecked by the processor. Sure, you can reuse it in a new system, but not everyone will buy a new graphics card every 1-2 years.
You say all these things that most would consider lies; I'll just call them inaccurate. You are never wasting a graphics card unless it is not in your PC.
And why is that? If you're using either card in anything older than a Core2 Quad or Phenom X4, you're wasting your money.
and then your expectations would be wrong
I wouldn't be expecting many people to be using either of these cards in an older computer, and certainly anything released in the past 4-5 years wouldn't have an issue with either of them.
Depends on the machine. That is only true for machines with a processor and RAM that are up to the task; in an older machine it is better to increase the throughput than the speed at which the data moves. Example: with two lanes of traffic, more cars can use the road. Take that same road, close down one lane, and raise the speed limit, and the single lane still won't cycle more traffic, because on the two-lane road a crash leaves you another lane open, whereas the single lane stops until the path is unblocked. The same is true here: some data, such as online data, takes longer to access than metadata and therefore increases processing time exponentially. If you can still send data while other data is being allocated, the computer is faster overall, especially at multitasking. That makes low-bus-width cards viable, but it also puts them at a disadvantage in heavy data processing and multitasking; they just can't compete there.
Bus width really doesn't matter. Memory speed can offset the lower bus width. Sending 10 chunks of data at 100 mph is exactly the same as sending 5 chunks at 200 mph. Half the bus width, double the memory frequency.
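The arithmetic behind that claim is easy to check: peak bandwidth is bus width times effective memory clock, so halving one while doubling the other cancels out exactly. A toy sketch, where the clock and width figures are purely illustrative and not either card's actual specs:

```python
def peak_bandwidth_gb_s(bus_width_bits: int, effective_clock_mhz: float) -> float:
    # (bytes moved per transfer) x (transfers per second)
    return (bus_width_bits / 8) * effective_clock_mhz * 1e6 / 1e9

wide_slow   = peak_bandwidth_gb_s(384, 4000)  # wide bus, slower memory
narrow_fast = peak_bandwidth_gb_s(192, 8000)  # half the width, double the clock
print(wide_slow, narrow_fast)                 # identical peak throughput
```

Of course this only equalizes peak throughput; latency and access patterns can still make the two configurations behave differently in real workloads, which is what the sibling comment is arguing.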
No. I have always liked ATI/AMD because of their honesty. It seems Nvidia has been intentionally nerfing their cards to keep pace with AMD's rate of progress, but who is to say they don't have an ace up their sleeve. However, the next line of cards from both manufacturers should be the most competitive lineup ever; I can't wait. They need to up the bus width: my PNY GTX 260 XLR8 had a 448-bit bus, whereas the new cards from Nvidia are 256-bit. Lame nerf.
From what I've seen, non-Nvidia-branded games are pretty much on par across AMD's lineup. But when DX12 arrives, from what we've seen, the R9s are wrecking Nvidia, and AMD CPUs pull much, much closer to Intel's in performance. I admit I laughed every time AMD said "our parts just aren't being utilized properly", sounding like a child losing a game... but then DX12 and Mantle came around and... I guess AMD wasn't actually lying.
Running a 380 here... getting 60 fps in ALL the games I've tested at 1080p... so... yeah... yeah it can... except Crysis 3 on the highest settings, but as usual the Crysis series is a GPU wrecker. However, I will agree that 4K is too much for it, though 4K is too much for pretty much every card except the latest Titan.
Sorry, but you are wrong. Normal RAM is written and read from constantly. VRAM, however, is much faster, with the expectation that it will be read from and written to millions if not billions of times every second.
CrossFire/SLI can't pool the VRAM of both cards, so effectively only the first card's VRAM capacity is usable. In this case, it can use 4 GB across the cards. Don't try to spin this by saying that one powerful card is better, or that DirectX 12 can use both cards' VRAM. I'm merely pointing out that it can only utilize the 4 GB of VRAM under certain conditions.
Radeon R9 280X is much better than the 380!
DirectX 12 games will not see the light of day soon, maybe Q1 2017.
380 for 4K? That is a big lie; the 380 cannot even get 60 FPS at 1080p in most games.
That's not true. I have tested the 280X and 380 in multiple games (Fallout 4, Assassin's Creed Syndicate, Mad Max) and the 280X is faster than the 380 by 10 to 15% (CPU: 4770K, RAM: 16 GB DDR3-1600), all at 1080p ultra, except Assassin's Creed Syndicate, which needs 4 GB of VRAM to enable ultra.
It can utilise all of the RAM, because what it loads stays there for a while. If this were the case, regular RAM would be useless because it is too slow, but that's obviously not the case.
If anything, it will handle 4 GB a lot better than a GTX 950. That didn't stop NVIDIA.
prove it.
You don't listen to NVIDIA, and you buy whichever you can get. All GCN cards, including Kaveri APUs, will get a DirectX 12 update. The 380 is a rebranded 285, but it's been given a slight clock increase, so it sits just above it.
I was replying to Sebastian xD
Funny how you don't mention a single fucking thing to prove your point; I just burned brain cells reading your shit. If you can't read the graphs that show how the 280X has higher floating-point performance and a bigger memory bus, then you have a few extra chromosomes and shouldn't be posting comments on the internet.
The 970 actually has a lower floating-point value; it's about 1.5x the R9 280X in 3DMark, yet in games they trade blows, and in fact in Crysis 3 it gets about 2x the framerate of a GTX 970. The main issue is that Nvidia lied about the card; it's sad that they felt the need to hide the fact that it's truly a hardware-limited 3.5 GB card. Lame. Oh well, at least AMD doesn't do that. I had a PNY GTX 260 XLR8 that I just upgraded from to the R9 280X, and it is a beast, especially for $150. Interesting fact: mine is the MSI Gaming edition and it shows up as an HD 7970 in GPU-Z; turns out it is the same as an HD 7970, just with different drivers and branding, much like the R9 380 is just an R9 285.