Radeon X1050 

Released January 2008
  • 400 MHz
  • 128 MB

As one would reasonably expect, with their desktop processors configured for very high CPU performance and much more limited GPU performance, Intel is the least CPU bottlenecked in the first place.
by Ryan Smith (Feb 2015)
We see a move from 64-bit wide LPDDR3 to 64-bit wide LPDDR4 on the memory interface, which improves peak memory bandwidth from 14.9 GB/s to 25.6 GB/s and improves power efficiency by around 40%.
by Joshua Ho (Jan 2015)
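
Those bandwidth figures line up with a 64-bit bus at LPDDR3-1866 and LPDDR4-3200 data rates (the exact speed grades are an assumption here, not stated in the quote); a quick check in Python:

    # peak bandwidth = bus width (bytes per transfer) x transfer rate (transfers/s)
    bus_bytes = 64 // 8                      # 64-bit interface -> 8 bytes per transfer
    lpddr3_gbs = bus_bytes * 1866e6 / 1e9    # ~14.9 GB/s, assuming LPDDR3-1866
    lpddr4_gbs = bus_bytes * 3200e6 / 1e9    # 25.6 GB/s, assuming LPDDR4-3200
    print(lpddr3_gbs, lpddr4_gbs)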

Benchmarks: Real-world tests of the Radeon X1050

PassMark: industry-standard benchmark for overall graphics card performance (data courtesy of PassMark)

Radeon X1050: 49

Features: Key features of the Radeon X1050

Effective memory clock speed: the frequency at which memory can be read from and written to

Radeon X1050: 400 MHz
Radeon HD 6950: 5,200 MHz
GeForce GT 730: 5,012 MHz
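
The effective figure is the physical memory clock multiplied by the number of transfers per clock cycle; a minimal sketch in Python, using the 200 MHz physical clock from the spec table below and the usual ×2 factor for DDR memory:

    # effective clock = physical memory clock x transfers per clock cycle
    memory_clock_mhz = 200      # physical clock from the spec table below
    transfers_per_clock = 2     # DDR memory transfers on both clock edges
    effective_clock_mhz = memory_clock_mhz * transfers_per_clock
    print(effective_clock_mhz)  # 400 (MHz), the figure listed above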

Memory bandwidth: the rate at which data can be read from or stored in onboard memory

Radeon X1050: 3.2 GB/s
Radeon HD 6950: 166.4 GB/s
GeForce GT 730: 40.1 GB/s
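
Peak bandwidth follows from the effective memory clock and the bus width; a rough back-of-the-envelope check against the X1050's 400 MHz effective clock and 64-bit bus:

    # peak bandwidth = effective memory clock (transfers/s) x bus width (bytes)
    effective_clock_hz = 400e6     # 400 MHz effective memory clock
    bus_width_bytes = 64 / 8       # 64-bit memory bus
    bandwidth_gbs = effective_clock_hz * bus_width_bytes / 1e9
    print(bandwidth_gbs)           # 3.2 (GB/s), the figure listed above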

Pixel rate: the number of pixels a graphics card can render to the screen every second

Radeon X1050: 3.2 GPixel/s
Radeon HD 6950: 28.16 GPixel/s
GeForce GT 730: 7.22 GPixel/s
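
Pixel rate is usually estimated as core clock times the number of render output processors; a quick sanity check against the X1050's specs:

    # pixel fill rate = core clock x number of ROPs
    core_clock_hz = 400e6       # 400 MHz core clock
    rops = 8                    # render output processors
    pixel_rate_gpixels = core_clock_hz * rops / 1e9
    print(pixel_rate_gpixels)   # 3.2 (GPixel/s), the figure listed above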

Texture rate: the speed at which a graphics card can perform texture mapping

Radeon X1050: 3.2 GTexel/s
Radeon HD 6950: 77.4 GTexel/s
GeForce GT 730: 14.43 GTexel/s
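
Texture rate is the same kind of estimate, with texture mapping units in place of ROPs:

    # texture fill rate = core clock x number of TMUs
    core_clock_hz = 400e6         # 400 MHz core clock
    tmus = 8                      # texture mapping units
    texture_rate_gtexels = core_clock_hz * tmus / 1e9
    print(texture_rate_gtexels)   # 3.2 (GTexel/s), the figure listed above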

Texture mapping units: built into each GPU, these resize and rotate bitmaps for texturing scenes

Radeon X1050: 8

Render output processors: GPU components responsible for transforming pixels as they flow between memory buffers

Radeon X1050: 8

In the News: From around the web

27 Sep
NVIDIA Publishes DirectX 12 Tips for Developers
by Scott Michaud | www.pcper.com
Drivers would play around with queuing them and manipulating them, to optimize how these orders are sent to the graphics device, but the game developer had no control over that.
Pretty much all of them apply equally, regardless of graphics vendor, but there are a few NVIDIA-specific comments, particularly the ones about NvAPI at the end and a few labeled notes in the “Root Signatures” category.
When it comes to the GPU-based head tracking, AMD is using its GCN architecture to "ensure [the] most up-to-date head tracking inputs are used for VR rendering and image/time warp."
AMD was quick to point out that its GCN architecture, which is found in everything from tablets right up to the latest and greatest gaming PCs, is ready for DirectX 12.
4 Mar
GDC 15: PhysX Is Now Shared Source to UE4 Developers
by Scott Michaud | www.pcper.com
This means that you can now make a legitimately free (no price, no ads, no subscription, no microtransactions, no Skylander figurines, etc.) game in UE4!
NVIDIA and Epic Games have just announced that Unreal Engine 4 developers can view and modify the source of PhysX.
As long-time readers may recall from our look at Intel’s Gen 7.5 GPU architecture, Intel scales up from GT1 through GT3 by both duplicating the EU/texture unit blocks (the subslice) and the ROP/L3 blocks (the slice common).

Specifications: Full list of technical specs

GPU

GPU brand: ATI
GPU name: RV410
Market: Desktop
Clock speed: 400 MHz
Is dual GPU: No
Reference card: None

raw performance

Texture mapping units: 8
Render output processors: 8
Pixel rate: 3.2 GPixel/s
Texture rate: 3.2 GTexel/s

memory

Memory clock speed: 200 MHz
Effective memory clock speed: 400 MHz
Memory bus: 64-bit
Memory: 128 MB
Memory bandwidth: 3.2 GB/s

noise and power

TDP: 24 W

Comments

Will this do 4k?