In infinite wisdom Alex Plantema answered:
> Rich wrote in message <40A3C634.7000906@somewhere.com>...
>> This is simply not true. Since 1970 or so, processor speeds have
>> gone from 4 MHz to 4,000 MHz; that's a thousand times faster in
>> over 30 years. But computers are not 1,000 times faster, as memory
>> has only gotten ten times faster, and everything else is dog slow.
>> Computers are faster than they were, but they are nowhere near
>> a thousand times faster.
> Processor speeds have grown much more than a thousand times in these 30 years.
> An example:
> A Pentium with its 32-bit data bus can move 4 bytes per clock period.
> I have a 4 MHz Z80 computer built in 1985, which has a block move instruction as well.
> It takes 21 clock periods to move one byte.
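To put rough numbers on that comparison (a back-of-the-envelope sketch using only the figures from this thread: 21 T-states per byte for the Z80's LDIR, and a best-case 4 bytes per clock for a Pentium 133):

```python
# Rough peak block-move throughput, using the figures quoted in the thread.

Z80_CLOCK_HZ = 4_000_000           # 4 MHz Z80
Z80_TSTATES_PER_BYTE = 21          # LDIR takes 21 T-states per byte moved
z80_bytes_per_sec = Z80_CLOCK_HZ / Z80_TSTATES_PER_BYTE

PENTIUM_CLOCK_HZ = 133_000_000     # Pentium 133
PENTIUM_BYTES_PER_CLOCK = 4        # 32-bit data bus: 4 bytes per clock, best case
pentium_bytes_per_sec = PENTIUM_CLOCK_HZ * PENTIUM_BYTES_PER_CLOCK

print(round(z80_bytes_per_sec))                          # ~190476 bytes/s
print(pentium_bytes_per_sec)                             # 532000000 bytes/s
print(round(pentium_bytes_per_sec / z80_bytes_per_sec))  # ~2793x
```

So on raw block moves alone the ratio is already nearly 2800:1, which supports the "much more than a thousand times" claim, at least for this one operation.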
It's so sad that they never used the Z80's duplicate register set
with CP/M. I understand that ZCPR used them. Bet it made a big
speed difference. Even then, I recall that WordStar was a speed
demon on my 4 MHz Applicard (with 64K of memory on board, WOW).
Nonetheless, I don't think most productivity software is (or was)
limited by block moves.
> And unlike modern computers, it doesn't have any cache memory.
Why would it need cache? The memory ran at full processor speed. Cache
is used to buffer things of vastly different speeds, like disk blocks
for a disk cache, and main memory for a processor cache. The
modern processor runs 20x faster than the memory bus; without the
cache, the processor would spend most of its time waiting for memory
access. With cache it runs at full speed for a large percentage of the
time, but it still must wait for the cache to fill whenever non-cached
memory is accessed.
Cache is not a good thing WRT computer speed. It's a sign that your
computer is running slower than its processor.
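The tradeoff described above can be sketched with the standard weighted-average access-time model (the 20x figure comes from the paragraph above; the 95% hit rate is purely an illustrative assumption, not a measured number):

```python
# Average memory access time with a cache (simple weighted-average model):
#   t_avg = hit_rate * t_cache + (1 - hit_rate) * t_memory

t_cache = 1.0    # one processor clock when the data is in cache
t_memory = 20.0  # ~20 clocks when it has to go out to main memory
hit_rate = 0.95  # illustrative assumption

t_avg = hit_rate * t_cache + (1 - hit_rate) * t_memory
print(t_avg)     # 1.95 clocks on average -- near full speed while hits dominate
```

It also shows why a cache-busting workload hurts so badly: drop the hit rate to 50% and the average jumps to 10.5 clocks, i.e. the machine is effectively running at memory speed.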
> Compiling a 3000-line program took 5 minutes;
> the same job on a Pentium 133 built in 1996 takes less than 0.5 seconds,
> i.e. a factor of 600 in 11 years.
You've tried?
Nonetheless, I suggest that there may be different factors at work, such
as more of the compiler being loaded into memory rather than being
loaded as needed from disk. I've gotten a three-order-of-magnitude speed
increase in an application I wrote way back in 1995 or so, on a 143
MHz UltraSparc. Same machine, same everything. When I originally wrote
the code, I mimicked what had previously been done by hand. In the next few
updates I merely made the coding more efficient (cut it down from 6 hours for
a quarter's data to maybe 2). By then it occurred to me that I was
doing it all wrong, and that I could redistribute the tasks more
efficiently and do some of the processing on the SQL side. After the
4th rewrite, I went out for coffee after starting it running. It was
done before I got back. I thought it had bombed, or failed to run
(SQL server down or somesuch). But I was wrong: it ran, and ran
correctly. The next run I timed it; I don't recall the exact time, but it was
less than 20 seconds. So we went from 6+ hours to under 20 seconds, a three-order-
of-magnitude speed increase on the exact same hardware.
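A quick sanity check on that claim (just the arithmetic on the figures given above):

```python
# Speedup from the rewrite: 6+ hours down to under 20 seconds.
before_sec = 6 * 60 * 60   # 6 hours = 21600 seconds (lower bound)
after_sec = 20             # upper bound on the new run time

print(before_sec // after_sec)  # 1080 -- comfortably three orders of magnitude
```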
The compiler writers have been doing the same with their compilers,
and systems have tons more resources than they did back in the 133 MHz
days.
That is, it's not at all clear that the differences were the result
of processor speed increases. These kinds of speed increases can
often result from writing good code (as opposed to today's 'get it out
the door on the planned ship date whether it's ready or not' mindset).
If you want to compare system speeds, you need to check some benchmarks.
And even these can be fooled if the entire benchmark fits in cache.
Look at the difference a large cache makes for SETI, everything else
being equal. This is why streaming video skips and breaks on a modern
2 GHz Celeron: with only 128K of cache, the system is limited to memory
speed for most things.
Or better yet, compile the same code with the same compiler on both
machines. That would give a more reasonable comparison.
Rich
> Alex.