Because overclocking a computer system is much cheaper than hot rodding a car?
I am having a hard time understanding people online, even hardcore overclockers, who push clocks to insane levels or crank memory frequency way up. Now, I do understand why people say things like "CPU clock speed should always be priority 1 compared to memory", like on this one page where the fellow at least had the sense to back his claims up. But I do not understand why people who own high-end hardware seem to have no concept of real overclocking nowadays, when the only videos we get on YouTube are from fellows who start the session with "I have never done overclocking before, but...".
Now, the things I want to ask:
* With any Intel CPU, why would you ever consider raising DDR3 speed over 1866MHz? (Unless you can change the so-called Uncore speeds.)
Why I am asking: Don't people have decent Elpida Hyper ICs to get latencies down to CL7-7-7-21 or 8-8-8-24, never letting tRCD exceed tCL, instead of chasing a few extra MHz? In the end that is faster in practice than even your high-end 2400MHz DDR3. Why do we even consider buying tCL9 or tCL10 memory, or accept the idea of prioritizing speed over latency? (Saying this while understanding, of course, that effective latency drops as speed rises.)
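The latency-vs-speed tradeoff above can be put in numbers. A minimal sketch (my own illustration, not from the thread): the time until the first word of a column access arrives is tCL divided by the memory I/O clock, and since DDR3 transfers on both clock edges, the I/O clock in MHz is half the DDR3-xxxx data rate.

```python
# First-word latency in nanoseconds for a DDR3 kit.
# DDR3 moves data on both clock edges, so the I/O clock (MHz) is half
# the rated data rate (MT/s): latency_ns = tCL / (rate/2) * 1000.

def first_word_latency_ns(data_rate_mt_s, tcl):
    """Time for the first word to arrive after a column access, in ns."""
    return tcl * 2000.0 / data_rate_mt_s

kits = {
    "DDR3-1866 CL7":  (1866, 7),
    "DDR3-1866 CL8":  (1866, 8),
    "DDR3-2400 CL10": (2400, 10),
    "DDR3-2400 CL11": (2400, 11),
}

for name, (rate, tcl) in kits.items():
    print(f"{name}: {first_word_latency_ns(rate, tcl):.2f} ns")
```

Run it and DDR3-1866 CL7 comes out around 7.5 ns to first word, while DDR3-2400 CL10 is around 8.3 ns, which is the poster's point: a tight 1866 kit can beat a loose 2400 kit on access latency, even though the 2400 kit wins on raw bandwidth.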
* Why is CPU clock speed considered more important? (I keep hearing this in 9 out of 10 reviews, on OCing forums, and pretty much everywhere.)
Why: If I loosened the DDR3 latencies and pushed the CPU to 5GHz with reasonable temperatures in the OS (which I would call a pretty decent OC), I would lose the gain from low DDR3 latencies, which in any real-life application, or even at the OS level, is worth ten times more than a 300-400MHz bump to the CPU. So I have a hard time understanding why anyone would really do this, unless they own some last-of-the-line Intel cold-bug chip whose IMC can't handle decent speeds.
* PCH voltages: why does everyone state "you do not need to raise PCH voltages"? This is also the case in almost 9 out of 10 threads.
Why: In my view this is the one voltage you want to raise if you actually want the DDR/FSB stabilized at the edge of performance. VCCSA, of course, gives the Intel IMC better tolerance for high-speed, low-latency RAM modules. But for the whole idea of nudging the FSB up a notch to find the highest FSB/IMC-tolerant speed for RAM at low latencies, this voltage only seems to start working once you push it above 1.140v (X79), maxing out around 1.3v (Intel's documented PCH voltage) or even higher, regardless of what Intel's documentation says, when doing absolute edge-of-stability OC on these boards.
Just saying: any idiot can push their memory to 2400MHz with high latencies and then let their CPU run at extremely high speed, simply because there is less stress on the part of the CPU where your actual real-life speed for applications, work, and the OS lives. But as far as I have tested, in hundreds of scenarios, that is a 100% slower overclocking concept than tuning the DDR latencies right to the edge of what the Intel IMC can do. In the one lucky case where a chip I tested had a decent IMC on X79, raising the FSB clock (or whatever the CPU<->memory clock is called these days; I'll keep calling it FSB) so that the RAM sits at the edge of 1866MHz with latencies as low as they go, with the CPU speed riding higher via the board FSB, was still the better scenario than trading RAM latencies away for higher RAM speed (lowering effective latency via speed) and pushing the CPU higher.
Of course, this all disregards the Gigabyte module stability issues with PCIe 3.0, the bad CPU microcode patch for virtualization, and the other screw-ups that have nothing to do with actually overclocking their boards.
Just a weird concept I keep seeing. Sorry for my bad English and the novel above.
Heck, I just got around to reading about the GA-X79-UP4 with the i7-4820K's dual Uncore/QPI speeds. The speed is 3.2GHz x2, where "dual" doesn't do a damn thing; it only controls two different channels separately and doesn't increase the actual speed needed. So I wonder where the hell Intel pulled their specification for even 1866MHz support when the QPI is not at least 3.7GHz, since the board seems to support 1600MHz max, unless my HWiNFO32/64 and AIDA64 are both reading it wrong. (No wonder the damn RAM OC seemed a bit slow on this board.)
Q9650 @ 3.80GHz [9x422MHz]
P35-DS4 [rev: 2.0] ~ Bios: F14
4x2GB OCZ Reaper PC2-8500 844MHz @4-4-4-10
MSI N460GTX Hawk Talon Attack (1GB) video card <---- SLI ---->
Seasonic SS-660XP2 80 Plus Platinum psu (660w)
WD Caviar Black WD6401AALS 640GB (data)
Samsung 840 Pro 256GB SSD (boot)
SLI @ 16/4 works when running HyperSLI
Cooler Master 120XL Seidon push/pull AIO cpu water cooling
Cooler Master HAF XB computer case (RC-902XB-KKN1)
Asus VH242H 24" monitor [1920x1080]
MSI N460GTX Hawk (1GB) video card
Logitech Z-5500 Digital 5.1 Speakers
win7 x64 sp1 Home Premium
HT|Omega Claro plus+ sound card
CyberPower CP1500PFCLCD UPS
E6300 (R0) @ 3.504GHz [8x438MHz] ~~ P35-DS3L [rev: 1.0] ~ Bios: F9 ~~ 4x2GB Kingston HyperX T1 PC2-8500, 876MHz @4-4-4-10
Seasonic X650 80+ gold psu (650w) ~~ Xigmatek Balder HDT 1283 cpu cooler ~~ Cooler Master CM 690 case (RC-690-KKN1-GP)
Samsung 830 128GB SSD MZ-7PC128B/WW (boot) ~~ WD Caviar Black WD6401AALS 640GB (data) ~~ ZM-MFC2 fan controller
HT|Omega Striker 7.1 sound card ~~ Asus VH242H monitor [1920x1080] ~~ Logitech Z-5500 Digital 5.1 Speakers
win7 x64 sp1 Home Premium ~~ CyberPower CP1500PFCLCD U.P.S.
To answer your memory speed vs latency and CPU speed questions, in general:
Low latency settings are not fashionable or impressive anymore; all that matters is the speed. As latency numbers increased along with the speeds, it's not as impressive to go from, say, a tCL of 12 or 14 down to 10 or 12. All that matters is 2133, 2400, 2600, etc., in the eyes of today's enthusiasts. When you reduce tCL by one or two and the PC no longer starts at 2400, the lower latency numbers are soon forgotten... or I should say ignored. People post their memory speed screenshots, but most don't show a memory benchmark along with them.
The same is generally true for high CPU OCs. The CPU clock speed is all that matters to the high-OC enthusiasts, not a combination of CPU clock and memory speed. Those two are not directly tied together anymore, since FSB overclocking no longer exists, nor does the FSB itself. The BCLK is the CPU and memory clock, and it cannot be increased by 10%; even 5% is too much for most boards.
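The BCLK relationship above is easy to sketch in a few lines. This is my own illustration, not from the thread; the multiplier values are typical examples, not board-specific limits.

```python
# On Sandy Bridge / Ivy Bridge, a single base clock (BCLK, nominally
# 100 MHz) feeds the CPU core, memory, and other domains through fixed
# multipliers, which is why it only tolerates a few percent of overclock
# before some unrelated domain (SATA, PCIe) becomes unstable.

def core_clock_mhz(bclk_mhz, cpu_mult):
    """CPU core clock = BCLK x CPU multiplier."""
    return bclk_mhz * cpu_mult

def ddr_rate_mt_s(bclk_mhz, mem_mult):
    """Effective DDR3 data rate; a memory multiplier of 18.66
    gives DDR3-1866 at stock 100 MHz BCLK."""
    return bclk_mhz * mem_mult

for bclk in (100.0, 105.0):  # stock vs. a 5% bump
    print(f"BCLK {bclk:.0f} MHz -> core {core_clock_mhz(bclk, 45):.0f} MHz, "
          f"memory {ddr_rate_mt_s(bclk, 18.66):.0f} MT/s")
```

The point of the sketch: because one clock feeds everything, a 5% BCLK bump drags the core, the memory, and every other domain up by 5% at once, unlike the old FSB days where the poster could tune the CPU-to-memory ratio more freely.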
Overall, with the simplification of overclocking comes a simplification of the results of overclocking. Only one factor gets looked at: clock speed. The end.
Regarding PCH voltage: it is unrelated to memory and BCLK, and the FSB no longer exists. The memory controller is on the CPU (IMC stands for Integrated Memory Controller, integrated on the processor). This is Intel's simple diagram of the CPU, RAM, and PCH interaction, applicable to SB and IVB:
The processor is doing more and more, while the support chips are doing less and less. Intel is down to one support chip, from the two-chip north bridge/south bridge design of the generation before SB. Currently the PCH handles the SATA interface, some of the PCI-E lanes, the on-CPU video interface, networking, HD audio, and USB support. QPI is also gone, no longer needed as an interface between the processor and chipset.
That honestly is a perfect answer, and my conclusion as well. I was simply giving my thoughts on the OCing subject here, but all in all, the whole idea of "what is fast" has gotten pretty loose in web overclocking circles.
The idea of actually usable speed, syncing memory where possible and sacrificing even some CPU speed to get the internal bus speed up, as it was across LGA775 > LGA1155 > LGA2011 > LGA1150, will always get the most out of the box in the end in real application performance. I wish I could someday run a benchmark against someone who OCs the CPU high while ignoring memory and internal board bandwidth, with my machine OC'd at low latencies (and probably even higher clocks in the end), and see what happens. Every year I take a few boards at the end of the year (the so-called second generation of the early-year products), and as soon as I figure out how to actually start increasing internal speeds and where they balance with memory, the performance result is more than twice what simply raising the CPU itself would do, even with high-clock memory, understanding that effective latency drops at higher clock speeds.
Heck, if I weren't still so eager to tweak whenever new stuff comes through the door, I could almost say we are, at the moment, trading quality products for new corporate toilet paper. And I don't just mean crappy metal; I'm referring to user demand being weak.
Had to edit regarding the FSB: honestly, I think the current QPI situation is just a cheap trick from Intel to make customers think their systems are fast, while in a sense disabling user access specifically to its voltages, and, on something like Z68 (Z77?), the ability to modify its ratio or speed.
Thanks for saying my explanation is perfect, although you may think so since I agreed with you on most topics. Some others might not agree.
Regarding the Intel "cheap tricks": there is some truth to that, but I doubt that is the only reason. The difference between earlier Intel architectures and Sandy Bridge is significant, but did Intel make those changes only to restrict the way a processor is overclocked? The first SB processors were a gift, IMO; their performance is amazing and surpasses Nehalem processors in almost all aspects, given four-core vs. four-core comparisons.
Checking your system specs, your X79/4820K system is certainly not of the FSB generation architecture.
Your idea of a benchmark comparison between older and newer systems would be interesting. But IMO the new-generation Intel processors have improvements over the others that cannot be overlooked, regardless of their architecture.
Well, usually people don't even bother to read "the nonsense" I write these days; they ignore it completely, since it's totally out of line with what the "real overclockers" on the web are telling them, which makes a singular concept look totally wrong in most eyes. I don't think I called it a perfect answer just because you agreed with the idea of general internal speed increases working as the backbone for overclocking CPU/memory. You explained the idea I was trying to describe, with a picture even; I'm not sure you actually agreed with the idea I was trying to explain in the top post, but your post gave way more detail than I was able to.
However, there's the matter of people on different websites explaining how their PCIe 3.0 doesn't work correctly because of drivers, or the idea that system stability is lost after they finish "LinXing" their systems for 3 hours: they then launch their first game, which uses CPU, memory, and GPU with full board bandwidth, and start having issues because the internal speeds can't keep up with all the channels.
(As for my equipment, yeah, I did actually manage to push it to a 143MHz host clock at ~4.9GHz """stable""". The only problem was the CPU_VID not POSTing at the x1.25 CPU/memory multiplier, which negates the idea of Intel EIST power saving while idle. I do "still" consider the power saving of today's equipment worth more than the actual OC, especially when you have to own a personal power plant to run it, "running a PhysX GPU beside CFX R9 290s".)
Currently I am testing something, since I found a loophole in Intel Turbo: pushing the actual CPU base multiplier to 40 at that host clock, which may sound crazy, but actually increases CPU_VID, and then adding a Turbo multiplier of 29-30, which should force the multiplier back down. I have not tested this yet, since I'm working around the clock at the moment, but if I am correct, or at least going by the promising results the Gigabyte board's 'Auto' voltages showed at x1.25, this might actually work to trick the board voltages. Hoping to test it today after work.
Yeah, not working: with a higher base multiplier kicked back down by Turbo, it seems the VID is set by the lowest multiplier, and it fails to POST. But I found something else interesting: setting the CPU gear ratio to x1.25 kicked the core to 4.8GHz, and pushing memory to 2000MHz at CL 8-8-8-24 seems to give me pretty damn decent speeds. It was a kind of long OCing session too, like 5 hours straight; burned out one fan controller on a Lian-Li box duct-taped to the back, heh. Damn, they smell nice when a cable burns, lol.. ;)
However, I am getting 134 GFLOPS out of the box, which with triple-channel memory is quite OK.
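For what it's worth, that 134 GFLOPS figure can be sanity-checked against theoretical peak. This is my own back-of-the-envelope sketch, not from the thread, and it assumes Ivy Bridge-E retires up to 8 double-precision FLOPs per cycle per core with AVX (a 4-wide add plus a 4-wide multiply; no FMA on that generation).

```python
# Sanity-check a LinX (Linpack) score against theoretical DP peak.
# Assumption: Ivy Bridge-E does up to 8 double-precision FLOPs per
# cycle per core with AVX (4-wide add + 4-wide multiply, no FMA).

def peak_gflops(cores, clock_ghz, flops_per_cycle=8):
    """Theoretical double-precision peak in GFLOPS."""
    return cores * clock_ghz * flops_per_cycle

peak = peak_gflops(cores=4, clock_ghz=4.8)  # i7-4820K at the 4.8 GHz above
measured = 134.0                            # score reported in the post
print(f"peak {peak:.1f} GFLOPS, Linpack efficiency {measured / peak:.0%}")
```

That puts the run at roughly 87% of a ~154 GFLOPS peak, which is in the normal range for a well-fed Linpack run; memory bandwidth and channel population are usually what decide whether you land at the top or bottom of that range, which fits the poster's focus on memory tuning.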
I am wondering if pushing the host clock up from 100.00MHz would harm PCIe the way it used to, especially with PCIe 3.0 being so unstable. Anyone got any input on the host clock?