Quote:
This implies that an increase in CPU speed alone will not noticeably improve the frame rates in CloD, but an increase in data throughput capability certainly does, at least on my system. Perhaps what this sim really needs is an Intel hexa-core with tri-channel memory, or even an Opteron FX with quad-channel memory… And the 'process affinity mask' setting is no longer in the config file, not that fiddling with it yielded any gain in my case either.
Got my FSB currently at 224, so pretty much the same.
Tony, how is that video card working for you? Just asking since I am considering switching to AMD, even though the vast majority of gamers are using Nvidia. I am planning to stay with AMD CPU-wise in any case. On the other hand, the new chipsets like yours now officially support both xfire and SLI, ahhh decisions, decisions.....:grin:
Hi Ned
I've used Ati cards in my own machines since the 8500 and have always been satisfied with them. I've also had a fair amount of experience with nVidia cards in various builds for others, and to be honest there's not a lot of difference between the two. I've always preferred Ati because of their more efficient approach (which typically results in better pricing), but few users would be aware of, or even care about, the architecture of whatever card they have, as long as it works OK. In the end it boils down to personal preference, and it's usually easier to decide on a product that's familiar. Being an AMD fan as well makes the choice somewhat easier for me since they acquired Ati, but you are right about the 990 chipset – my board supports SLI too, so you are no longer forced to buy Intel if you want to go that route.

As far as the 6970 goes, its performance generally falls between the GeForce 570 and 580, and locally it's priced a fair bit cheaper than the 570, so a no-brainer in my case. My son has an MSI HD5870 Lightning I've used for comparison; mine is slightly faster in DX9 games (which most modern games are), but the 6970 also allows higher detail settings in CloD thanks to the additional memory, which is what I was interested in. It is a fair bit longer than the 5870 (and my previous 5850), which required me to mod my case for it to fit, but at least the power requirement isn't any greater.

In my opinion you won't really see a huge increase in frame rates over your 470 in CloD, although you will probably be able to turn some detail up without suffering any loss (probably not what you wanted to hear if you were intending to buy a new card :-P). I replaced my 5850 because my daughter needed a new card (her 512MB 3870 was getting a bit long in the tooth), rather than because I wanted more performance. If I had done it solely for more speed I'd have been disappointed, but that is always the case when replacing a high-end card with a newer model. I hope this has been somewhat helpful.
Very helpful, thanks :grin:.
The 6970 looks really interesting right about now, bang for the buck. The increase in frame rates is one thing (maybe I won't be getting much over my 470), but the big thing is that more and more games are quite VRAM hungry, so the main reason to buy a new card would be the extra available VRAM. Two other things concern me: would switching from Nvidia to AMD pose any problems, such as driver issues, needing to uninstall anything, or anything operating-system wise? And there is a new PCIe x16 standard coming, 3.0; so far I have only seen Intel mobos with future support for it (when Ivy Bridge comes out). Have you seen anything from AMD regarding this?
Switching to a graphics card with a different chipset? – I would use DriverCleaner to completely remove any trace of the previous drivers before installing the new ones. I had to do this when changing from my 5850 to the 6970, because Win7 intelligently (!) decided to keep the previous clock profiles.
PCIe 3.0? – this is apparently due on mainboards towards the end of the year, and the AMD 7xxx cards (which should be out around December this year) are rumoured to be compliant, as is Intel's Ivy Bridge (rumoured for Q2 2012). It will allow greater bandwidth than is currently available, but is that a huge issue? – it's not as if the current version 2.0 has reached saturation. And the new standard is apparently backward compatible with version 2.0, so there shouldn't be any hardware conflicts. The trouble is that if you keep waiting for the next rumoured hardware standard you will never buy anything, as there is always something bigger-better-faster-more on the way.
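Just to put rough numbers on that bandwidth point, here is a back-of-the-envelope sketch using the published per-lane rates and encoding overheads (nothing measured on real hardware):

Code:
# Back-of-the-envelope PCIe x16 bandwidth comparison (per direction).
# Per-lane transfer rates and encoding overheads are the published figures.

def lane_bandwidth_mb_s(transfer_rate_gt_s, encoding_efficiency):
    # 1 transfer carries 1 raw bit per lane; efficiency accounts for line coding.
    bits_per_second = transfer_rate_gt_s * 1e9 * encoding_efficiency
    return bits_per_second / 8 / 1e6   # bits -> bytes -> MB

LANES = 16

pcie20 = lane_bandwidth_mb_s(5.0, 8 / 10) * LANES     # PCIe 2.0: 5 GT/s, 8b/10b
pcie30 = lane_bandwidth_mb_s(8.0, 128 / 130) * LANES  # PCIe 3.0: 8 GT/s, 128b/130b

print(f"PCIe 2.0 x16: ~{pcie20 / 1000:.1f} GB/s per direction")  # ~8.0 GB/s
print(f"PCIe 3.0 x16: ~{pcie30 / 1000:.1f} GB/s per direction")  # ~15.8 GB/s

Roughly double the headroom, but since a single current card doesn't come close to saturating 2.0 x16, it's not a reason to hold off on a purchase by itself.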
I know, you wait and as soon as you think you've made up your mind about a purchase, something else pops up.
With PCIe x16 3.0 I meant more whether you had come across any info about AMD mobos. Eventually they will have to implement it, but right now I can only find Intel mobos with future 3.0 compatibility. At this point, though, since I can run things very nicely on my current comp, waiting may be an option given all the new things coming out. At least until the end of the year.
Quote:
There is virtually no difference, as shown in numerous benchmarks, between running for instance DDR2 at 400MHz with 5-5-5-15 timings or at 533MHz with 5-6-6-18. The main advantage of DDR3 is the increased data path of 240 pins (not all of which are used for data) as opposed to 184-pin DDR2. Couple that with the increased bus width going from, say, an AM2 to an AM3 (or a comparable last-generation to current-generation Intel comparison) and the performance increase is noticeable. But don't be fooled by the marketing hype that higher clock frequencies automatically translate into big performance gains, or any at all. The main thing to look at is the quality of the memory and how tight you can get the timings at any given frequency. If you spare your memory the overclock and tighten up the timings as much as you can while maintaining stability, the benchmarks would and have shown that there is virtually no difference in gaming performance or otherwise. The overclock will only generate more heat and electrical stress for no gain compared to the tighter timing scheme.
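To see why those two configurations land so close together, it helps to convert the CAS figure from clock cycles into nanoseconds. A minimal sketch of the arithmetic, using the frequencies quoted above:

Code:
# Convert memory timings to absolute latency, to compare the two DDR2
# configurations mentioned above. Frequencies are the real (I/O) clock,
# timings are in clock cycles.

def cas_latency_ns(clock_mhz, cas_cycles):
    cycle_time_ns = 1000.0 / clock_mhz
    return cas_cycles * cycle_time_ns

configs = [
    ("DDR2 @ 400 MHz, CL5", 400, 5),
    ("DDR2 @ 533 MHz, CL5", 533, 5),  # same CL at the higher clock
    ("DDR2 @ 533 MHz, CL6", 533, 6),  # looser timing typically needed there
]

for name, mhz, cl in configs:
    print(f"{name}: {cas_latency_ns(mhz, cl):.1f} ns")

# 400 MHz CL5 -> 12.5 ns, 533 MHz CL6 -> 11.3 ns: the higher clock with looser
# timings ends up within a nanosecond or so of the lower clock with tight
# timings, which is why the benchmarks show virtually no difference.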
Funny thing about the timing with my memory.
It is 1600MHz memory, but the motherboard of course defaults it to 1333. In the SPD of the memory, though, the timings for 1600 are tighter than for 1333, at least according to CPUID/CPU-Z. So along with the CPU FSB overclock I also tightened the memory timings to the ones listed in the 1600MHz SPD profile, even though the memory is not running at 1600. It's running at 1500-ish, due to the FSB frequency of 224 and starting out from the default of 1333. Somewhere I read that AMD likes tighter timings anyway, while Intel seems to do better with looser timings.
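That 1500-ish figure follows directly from the reference clock scaling. A rough sketch, assuming a stock 200MHz reference clock and the board keeping the 1333 divider (both assumptions, since the exact values depend on the board):

Code:
# Estimate the effective memory speed when overclocking via the reference/FSB clock.
# Assumes a stock 200 MHz reference clock and the board keeping the DDR3-1333 divider.

STOCK_REF_CLOCK_MHZ = 200      # typical AM3 reference clock (assumed)
OVERCLOCKED_REF_MHZ = 224      # the value mentioned above
BASE_MEMORY_SPEED = 1333       # effective (DDR) rate selected by the board

effective_speed = BASE_MEMORY_SPEED * OVERCLOCKED_REF_MHZ / STOCK_REF_CLOCK_MHZ
print(f"Effective memory speed: ~{effective_speed:.0f} MT/s")  # ~1493, i.e. "1500-ish"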
Some tests
Hi,

here are some tests I've done to understand what influences IL-2 CoD performance.

These are the common options for all tests: http://img828.imageshack.us/img828/3...onsettings.jpg By desigabri at 2011-11-12

Notes:
- The HD6950 is a 2GB version tweaked up to an HD6970
- The CPU is air-cooled with a Corsair A70 (it hardly fitted into the cabinet :(:(:( )
- DDR3 16GB Vengeance 1600MHz
- No Windows paging file
- Game loaded onto a 6GB ramdisk to reduce game and mission loading times (and in-game object loading during flight)
- Tests made playing the "Black Death" track and using the FPS SHOW START internal option
- Radio voice communication OFF in the audio menu settings

Common game settings: http://img851.imageshack.us/img851/8093/maxlow.jpg By desigabri at 2011-11-12

Overclock 1: http://img210.imageshack.us/img210/5139/cpucloktest.jpg By desigabri at 2011-11-12
This test seems to show that CPU overclocking benefits in-game FPS.

Overclock 2: http://img507.imageshack.us/img507/6...ucloktest2.jpg By desigabri at 2011-11-12
This test seems to confirm the previous one, and shows how heavy the shadows option is.

GPU overclock: http://img824.imageshack.us/img824/6...ttingstest.jpg By desigabri at 2011-11-12
This one should show that GPU overclocking doesn't help (???!)

Affinity: http://img194.imageshack.us/img194/9...finitytest.jpg By desigabri at 2011-11-12
Assigning the processes to one, two, three, four and six cores (there is a small sketch after this post for setting the affinity yourself):
- Starting from 2 cores and up, launcher.exe doesn't change performance.
- The best setup seems to be a dual-core processor, which gets the best results for minimum FPS.
- A single-core CPU isn't able to play like the others.

Other settings: http://img819.imageshack.us/img819/2...ttingstest.jpg By desigabri at 2011-11-12
You can't see performance changes here from working on minor settings or trying to use process priority assignments.

Power boost: http://img832.imageshack.us/img832/6060/driverboost.jpg By desigabri at 2011-11-12
Same conclusion for the Catalyst boost settings.

Very low: http://img209.imageshack.us/img209/8...sonverylow.jpg By desigabri at 2011-11-12
Very low settings help a lot for FPS, but you can see that "effects" are very heavy for minimum FPS here, even when set to MEDIUM.

Repeatable: http://img580.imageshack.us/img580/7...repeatibil.jpg By desigabri at 2011-11-12
The last comparison is made only to show that tests run at different times with the same options give the same results.

Before these tests I believed the GPU overclock was more important than the CPU overclock (that was my conclusion for ArmA2). For IL-2 CoD it seems to be the opposite.
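For anyone wanting to repeat the affinity part of the test without clicking through Task Manager every run, here is a minimal sketch using Python and the third-party psutil package. The process name Launcher.exe and the core list are just example values taken from the post above, not anything the game documents:

Code:
# Pin the CloD process to a chosen set of cores, as in the affinity test above.
# Requires psutil (pip install psutil); run from the same user account as the game.
import psutil

TARGET_PROCESS = "Launcher.exe"   # CloD's main process, as named in the test above
CORES = [0, 1]                    # example: restrict to the first two cores

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and proc.info["name"].lower() == TARGET_PROCESS.lower():
        proc.cpu_affinity(CORES)          # set the affinity mask for this process
        print(f"Pinned PID {proc.pid} to cores {CORES}")

The same thing can be done from a command prompt when launching the game, using start /affinity with a hex core mask, if you prefer not to install anything.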
The code still needs a lot of attention performance-wise.
I'm sure they are hard at work on a big frame rate performance boost.