Subject: Re: coding for performance
From: Sharku
Date: 16/12/2003, 02:35
On Tue, 16 Dec 2003 11:37:41 +1000, "ComputerDoctor"
<davekimble@austarnet.com.au> wrote in
<brlnqo$hif$1@austar-news.austar.net.au>:
> Does this mean CLI 3.03 is more efficiently coded than 3.08?
I don't know if it's more efficiently coded, but it is more efficiently
compiled, with different compiler optimizations.
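As a minimal sketch of that point (this is a hypothetical illustration, not the actual S@H build process or source), the very same code compiled with different optimization flags can produce binaries with quite different crunch speeds:

```shell
# Write a small number-crunching test program (stand-in for real client code).
cat > loop.c <<'EOF'
#include <stdio.h>
int main(void) {
    double acc = 0.0;
    for (long i = 0; i < 10000000; i++)
        acc += (double)i * 1.000001;   /* keep the loop doing real FP work */
    printf("%f\n", acc);
    return 0;
}
EOF

# Same source, two optimization levels.
cc -O0 -o loop_O0 loop.c   # no optimization
cc -O3 -o loop_O3 loop.c   # aggressive optimization

# The -O3 build is often noticeably faster, yet both print the same result.
time ./loop_O0
time ./loop_O3
```

The point being: identical "coding" can still yield a leaner or fatter client depending purely on how it was compiled.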
> As an old programmer in assembly code (remember that?) as well as in
> higher level languages, I well understand the value of using high level
> languages to be platform-independent and to keep down the cost of
> development and on-going maintenance, but when the main objective is to
> crunch as many WUs as possible, surely newer versions should always be
> leaner and meaner than the old.
Not entirely true: crunching as many WUs as possible is an important
objective, but the main one is to get scientifically/mathematically
valid results. A client that churns out 100 WUs an hour on a 486 might
seem like a good thing to have, but not if all the WUs it returns are
rubbish. After all, it's a "distributed computing project", not a
"distributed rubbish generation project"; we already have one of those,
it's called usenet. ;)
The S@H team has even gone as far as to implement a "reversed Moore's Law"
in consecutive versions of the client: as PCs got faster, the mathematical
algorithms performed by S@H became more complex, to detect even more
potentially useful signals. Older WUs that had already been crunched were
redistributed so they could be crunched again with those new algorithms.
Some have even suggested that the S@H team is artificially keeping the
3.08 client slower than it could be because they have an overcapacity of
computing power and don't want to lose it. When BOINC comes around,
they're going to need that power.
Mind you, none of this is official S@H info/policy, it's just what I
picked up from reading this newsgroup.
Sharku
--
Customer: "What does UART stand for anyway??"
Tech Support: "It stands for UART gettin' online"