CPU Multithreading please.
I'm playing COD right now, at an uncomfortable 22-30 fps.
My GPU is a 6950 @ 920 MHz with 2 GB, BUT the game is using only 66% of the GPU's capability (as reported by GPU-Z and ATI Tray Tools). I had a 6850 1GB before and saw no FPS improvement when I got the new card. My CPU is a Phenom II X6 1090T, but the game uses only about 1.5 cores at most. So I assumed the sim is bottlenecked at the CPU, as it's incapable of using more than 1.5 cores. I put the plane on autopilot over the sea and logged the FPS:

CPU @ 3200 MHz = 29 FPS
CPU @ 3700 MHz (+15%) = 33 FPS (+14%)
CPU @ 4100 MHz (+28%) = 37 FPS (+27%)

So the big problem is the CPU and the fact that the main thread runs on only one core. If they could multithread it, everybody using a 2+ core CPU would see a massive FPS improvement.
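The near-linear scaling in the log above can be sanity-checked with a quick calculation; a minimal sketch in Python using the three data points from the post:

```python
# FPS vs. CPU clock from the autopilot-over-sea test in the post.
# If the sim is bound by one CPU core, FPS should scale roughly
# in proportion to the clock increase.
runs = [(3200, 29), (3700, 33), (4100, 37)]  # (CPU MHz, measured FPS)

base_mhz, base_fps = runs[0]
for mhz, fps in runs[1:]:
    clock_gain = (mhz - base_mhz) / base_mhz
    fps_gain = (fps - base_fps) / base_fps
    print(f"{mhz} MHz: clock +{clock_gain:.0%}, FPS +{fps_gain:.0%}")
```

The clock gain and FPS gain track each other within a couple of percentage points, which is what you'd expect from a single-threaded CPU bottleneck.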
Quote:
and your setting of ProcessAffinityMask= is?????
Quote:
ProcessAffinityMask only chooses which cores the process is allowed to run on; it doesn't make the game use more than one.
Quote:
Process Affinity values:
=1 - core 0
=2 - core 1
=3 - cores 0+1
=4 - core 2
=5 - cores 0+2
=6 - cores 1+2
=7 - cores 0+1+2
=8 - core 3
=9 - cores 0+3
=10 - cores 1+3
=11 - cores 0+1+3
=12 - cores 2+3
=13 - cores 0+2+3
=14 - cores 1+2+3
=15 - cores 0+1+2+3

PS. And make sure that you remove the ; sign in front of the ProcessAffinityMask= line in your conf.ini (just in case) ~S~
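The values in that table are just a plain bitmask, where bit n enables core n; that is also why, for example, 11 (= 1 + 2 + 8) means cores 0, 1 and 3. A small sketch in Python to decode any mask value:

```python
def cores_from_mask(mask: int) -> list:
    """Return the core indices enabled by an affinity bitmask (bit n = core n)."""
    return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

# A few entries from the table:
assert cores_from_mask(1) == [0]
assert cores_from_mask(5) == [0, 2]           # 5 = 1 + 4
assert cores_from_mask(11) == [0, 1, 3]       # 11 = 1 + 2 + 8
assert cores_from_mask(15) == [0, 1, 2, 3]
```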
Now the question is does that actually work?
I know that in IL-2, regardless of the above assignment, it would only use 1 core. Has multicore support been built into CoD? I guess I'll fire it up and check out my core usage. We shall see...
Here in my default conf.ini I don't have this setting anymore.
All the settings I use now give the same result: Launcher.exe spread over my 4 cores.
Quote:
Sorry, I assumed that because that's how it usually works in Linux daemons. My conf.ini does not have that option... Well, I tried using it, and the difference I noticed is that instead of one fully used core plus half of another, I now get 4 evenly loaded cores at about 30% each. So, as I imagined, the game itself runs on only one thread; there is no way to make it use more right now. I think there are additional threads for terrain loading, but that doesn't improve your normal FPS while no terrain is being loaded.
It is my understanding that process affinity was originally included in the config but was removed quite a while ago in subsequent patches.
At some point I added the line back in with a value of 15, and performance was not only not better, it was worse on my comp. I do agree that the game benefits from increased CPU performance/overclocking; I have the same experience here, with a much better FPS rate when I run the game with my CPU overclocked (right now up to 4.1 GHz). At the same time, game performance imo is also influenced by how the RAM is configured when overclocking the CPU. My best playability so far comes from a mild overclock (3.8 GHz) on the CPU via the FSB alone, with the RAM speed increased at the same time.
I guess the code is not finished yet...
Quote:
This implies that an increase in CPU speed alone will not noticeably improve the frame rates in CloD, but an increase in data throughput capability certainly does, at least on my system. Perhaps what this sim really needs is an Intel hexa-core with tri-channel memory, or even an Opteron FX with quad-channel memory... And the 'process affinity mask' setting is no longer in the config file; not that fiddling with the settings yielded any gain in my case either.
Got my FSB currently at 224, so pretty much the same.
Tony, how is that video card working out for you? Just asking, since I am considering a switch to AMD, even though the vast majority of gamers use nvidia. But I am kind of planning to stay with AMD CPU-wise. On the other hand, the new chipsets, like yours, now officially support both CrossFire and SLI. Ahhh, decisions, decisions.....:grin:
Hi Ned
I've used ATI cards in my own machines since the 8500, and have always been satisfied with them. I have had a fair amount of experience with nVidia cards as well, in various builds for others, and to be honest there's not a lot of difference between the two. I've always preferred ATI because of their more efficient approach (which typically results in better pricing), but few users would be aware of, or even care about, the architecture of whatever card they may have, as long as it works OK. In the end it boils down to personal preference, and it's usually easier to decide on a product that's familiar. Being an AMD fan as well makes the choice somewhat easier for me since they acquired ATI, but you are right about the 990 chipset: my board supports SLI too, so you are no longer forced to buy Intel if you want to go this route.

As far as the 6970 goes, its performance generally falls between the GeForce 570 and 580, and locally it's priced a fair bit cheaper than the 570, so a no-brainer in my case. My son has an MSI HD5870 Lightning I've used for comparison; mine is slightly faster in DX9 games (which most modern games are), but the additional memory allows higher detail settings in CloD, which is what I was interested in. It is a fair bit longer than the 5870 (and my previous 5850), which required me to mod my case in order for it to fit, but at least the power requirement isn't any greater.

In my opinion, you won't really experience a huge increase in frame rates over your 470 in CloD, although you will probably be able to turn some detail up without suffering any loss (probably not what you wanted to hear if you were intending to buy a new card :-P). I replaced my 5850 because my daughter needed a new card (her 512MB 3870 was getting a bit long in the tooth), rather than because I wanted more performance. If I had done it solely for more speed I'd have been disappointed, but that is always the case when replacing a high-end card with a newer model.
I hope this has been somewhat helpful.
Very helpful, thanks :grin:.
The 6970 looks really interesting right about now, bang for the buck. The increase in frame rates is one thing; maybe I won't gain much over my 470, but the big thing is that more and more games are quite VRAM hungry, so the main reason to buy a new card would be more available VRAM. Two other things concern me: Would it pose any problem to switch from nvidia to AMD; any driver issues, uninstalling necessary, or operating-system issues? And there is a new standard for PCIe x16 coming, 3.0; so far I have only seen Intel mobos with future support for it (when Ivy Bridge comes out). Have you seen anything from AMD regarding this?
Switching to a different chipset graphics card? – I would use DriverCleaner to completely remove any trace of the previous drivers before installing the new ones. I had to do this when changing from my 5850 to the 6970 due to Win7 intelligently (!) deciding to keep the previous clock profiles.
PCI-E 3.0? This is apparently due on mainboards towards the end of the year, and the AMD 7xxx cards (which should be out around December this year) are rumoured to be compliant, as is Intel's Ivy Bridge (rumoured to be out Q2 2012). It will allow greater bandwidth than is currently available, but is this a huge issue? It's not as if the current version 2.0 has reached saturation. And the new standard is apparently backward compatible with version 2.0, so there shouldn't be any hardware conflicts. The trouble is, if you decide to wait for the next rumoured hardware standard you will never purchase, as there is always something bigger-better-faster-more on the way.
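For a rough sense of the numbers: PCIe 2.0 signals at 5 GT/s with 8b/10b encoding, while 3.0 signals at 8 GT/s with the more efficient 128b/130b encoding, so usable per-lane bandwidth roughly doubles. A back-of-envelope sketch:

```python
# Usable per-lane bandwidth: raw transfer rate scaled by encoding efficiency.
def lane_gbit(gt_per_s, payload_bits, total_bits):
    return gt_per_s * payload_bits / total_bits  # usable Gbit/s per lane

pcie2 = lane_gbit(5.0, 8, 10)      # PCIe 2.0: 5 GT/s, 8b/10b encoding
pcie3 = lane_gbit(8.0, 128, 130)   # PCIe 3.0: 8 GT/s, 128b/130b encoding

# An x16 slot, converted from Gbit/s to GB/s:
print(f"PCIe 2.0 x16: {pcie2 * 16 / 8:.1f} GB/s")
print(f"PCIe 3.0 x16: {pcie3 * 16 / 8:.2f} GB/s")
```

That works out to about 8 GB/s vs. roughly 15.75 GB/s for an x16 slot, which supports the point above: 2.0 is nowhere near saturated by a single 2011-era GPU.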
I know, you wait and as soon as you think you made up your mind about a purchase something else comes popping up.
With the pcie x16 3.0 I meant more if you maybe came across any info about amd mobos. Eventually they will have to implement it, but as of right now I can only find intel mobos with a future 3.0 compatibility. I think though that at this point, since I can run things very nicely with my current comp, waiting may be an option due to a lot of new things coming out. At least waiting until the end of the year. |
Quote:
There is virtually no difference, as shown in numerous benchmarks, between running for instance DDR2 at 400 MHz with 5-5-5-15 timings or at 533 MHz with 5-6-6-18. The main advantages of DDR3 over DDR2 are higher transfer rates at lower voltage (both use 240-pin DIMMs; it was the original DDR that used 184 pins). Couple that with the increased bus width between, say, an AM2 and an AM3, or a comparable last-generation to current-generation Intel processor, and the performance increase is noticeable. But don't be fooled by the marketing hype that higher clock frequencies automatically translate into big (or any) performance gains. The main thing to look at is the quality of the memory and how tight you can get the timings at any given frequency. If you spare your memory the overclock and tighten the timings as much as you can while maintaining stability, the benchmarks have shown virtually no difference in gaming performance or otherwise. The overclock will only generate more heat and electrical stress for no gain compared to the tighter timing scheme.
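What actually matters is absolute latency in nanoseconds, which is simply timing cycles divided by the memory clock. A minimal sketch converting the two example configurations from the post (first timing number is CAS):

```python
# Absolute latency in nanoseconds = timing cycles / memory clock (MHz) * 1000.
def latency_ns(cycles, clock_mhz):
    return cycles / clock_mhz * 1000

cas_400 = latency_ns(5, 400)   # DDR2-800,  5-5-5-15 -> 12.5 ns CAS
cas_533 = latency_ns(5, 533)   # DDR2-1066, 5-6-6-18 -> ~9.4 ns CAS
tras_400 = latency_ns(15, 400)  # tRAS: 37.5 ns
tras_533 = latency_ns(18, 533)  # tRAS: ~33.8 ns
print(f"CAS: {cas_400:.1f} ns vs {cas_533:.1f} ns")
print(f"tRAS: {tras_400:.1f} ns vs {tras_533:.1f} ns")
```

Converting to nanoseconds makes differently clocked kits directly comparable, and shows why a higher clock with looser timings often lands in the same ballpark as a lower clock with tight timings.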
Funny thing about the timings with my memory.
It is 1600 MHz memory; the motherboard of course defaults it to 1333. But in the SPD of the memory the timings for 1600 are tighter than for 1333, at least according to CPUID/CPU-Z. So along with the CPU FSB overclock I also tightened the memory timings to the ones indicated by the 1600 MHz SPD profile, even though the memory is not running at 1600. It's running at 1500-ish, due to the FSB frequency of 224 and starting out from the default of 1333. Somewhere I read that AMD likes tighter timings anyway, while Intel seems to do better with looser timings.
Some tests
hi,
here are some tests I've done trying to understand what influences IL-2 CoD performance. These are the common options for all tests: http://img828.imageshack.us/img828/3...onsettings.jpg By desigabri at 2011-11-12

Notes:
- The HD6950 is a 2GB version tweaked up to a HD6970
- The CPU is air-cooled with a Corsair A70 (hardly fitted into the cabinet :(:(:( )
- DDR3 16GB Vengeance 1600 MHz
- No Windows paging file
- Game loaded onto a 6GB ramdisk to speed up game and mission loading times (and in-game object loading during flight)
- Tests made playing the "Black Death" track and using the FPS SHOW START internal option
- Radio voice communication OFF in the audio menu settings
- Common game settings: http://img851.imageshack.us/img851/8093/maxlow.jpg By desigabri at 2011-11-12

Overclock 1: http://img210.imageshack.us/img210/5139/cpucloktest.jpg By desigabri at 2011-11-12
This test seems to show that CPU overclock benefits in-game FPS.

Overclock 2: http://img507.imageshack.us/img507/6...ucloktest2.jpg By desigabri at 2011-11-12
This test seems to confirm the previous one, and shows how heavy the shadows option is.

GPU overclock: http://img824.imageshack.us/img824/6...ttingstest.jpg By desigabri at 2011-11-12
This one should show that GPU overclock doesn't help (???!)

Affinity: http://img194.imageshack.us/img194/9...finitytest.jpg By desigabri at 2011-11-12
Assigning the process to one, two, three, four and six cores...
- Note that from 2 cores and up, launcher.exe performance doesn't change.
- The best setup seems to be a dual-core assignment, which gets the best minimum FPS.
- A one-core CPU isn't able to play like the others.

Other settings: http://img819.imageshack.us/img819/2...ttingstest.jpg By desigabri at 2011-11-12
You can't see performance changes here from working on minor settings or trying to use process priority assignments.

Power boost: http://img832.imageshack.us/img832/6060/driverboost.jpg By desigabri at 2011-11-12
Same conclusion for the Catalyst boost settings.

Very low: http://img209.imageshack.us/img209/8...sonverylow.jpg By desigabri at 2011-11-12
Very low settings help a lot with FPS; anyway, you can see that "effects" are very heavy on minimum FPS here, even when set to MEDIUM.

Repeatable: http://img580.imageshack.us/img580/7...repeatibil.jpg By desigabri at 2011-11-12
The last comparison is made only to show that tests run at different times with the same options give the same results.

Before these tests I believed the GPU overclock mattered more than the CPU overclock (that was my conclusion for ArmA 2). For IL-2 CoD it seems to be the opposite.
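For anyone reproducing tests like these, a minimal sketch of how minimum and average FPS can be derived from per-frame render times (the kind of data an in-game FPS counter samples over a benchmark track); the frame times here are made up for illustration:

```python
# Per-frame render times in milliseconds over a benchmark run (fabricated data).
frame_times_ms = [33.3, 40.0, 28.6, 50.0, 33.3, 25.0]

min_fps = 1000.0 / max(frame_times_ms)                      # worst single frame
avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)  # frames / total time
print(f"min {min_fps:.1f} FPS, avg {avg_fps:.1f} FPS")
```

Note the average is computed as frames divided by total time, not as the mean of instantaneous FPS values, which would overweight the fast frames.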
The code still needs a lot of attention for performance.
I'm sure they are hard at work on a big frame-rate performance boost.
Powered by vBulletin® Version 3.8.4
Copyright ©2000 - 2025, Jelsoft Enterprises Ltd.
Copyright © 2007 Fulqrum Publishing. All rights reserved.