Subject: Re: Distributing Classic Seti WorkUnits
From: f/fgeorge
Date: 08/03/2005, 18:40
Newsgroups: alt.sci.seti

On Tue, 08 Mar 2005 18:35:18 +0000, James de Lurker
<jtl2nospamMUNGIEjump@hotmail.com> wrote:

Thanks to Gregor making his setiqueue available I am able to keep
my main 24/7 Linux box occupied with seti WUs once again.

If there is anyone here from the main seti classic server team
itself, or someone who can contact them, I have a community-based
suggestion. It seems a logical conclusion.

Just as the community processes the units and passes them back,
it might reduce peak loading on the central server if the busiest
community seti farms with queues made their caches available as
part of a roster, monitoring their cache levels so that they
retained sufficient units for their own needs whilst caching for
other regular users whenever the central facility was in strife.
The admins there could then concentrate on filling the nominated
distributed caches.
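The roster idea above could be sketched roughly as follows. This is
only an illustration of the policy, not anything the seti servers
actually run; the class name, reserve figure, and threshold logic are
all my own assumptions.

```python
# Hypothetical sketch of the roster policy: a farm serves surplus
# workunits to outside users only while it holds more than its own
# reserve, and only while the central server is down.
# All names and numbers are illustrative assumptions.

class FarmCache:
    def __init__(self, daily_consumption, reserve_days=3):
        self.units = []                           # queued workunits
        self.reserve = daily_consumption * reserve_days

    def add_units(self, new_units):
        self.units.extend(new_units)

    def surplus(self):
        """Units beyond what the farm needs for itself."""
        return max(0, len(self.units) - self.reserve)

    def serve_outside_user(self, central_server_up):
        """Hand one unit to a regular user, if the policy allows it."""
        if central_server_up or self.surplus() == 0:
            return None                           # central server handles it
        return self.units.pop()

farm = FarmCache(daily_consumption=10, reserve_days=3)
farm.add_units(range(40))                         # 40 cached, reserve is 30
print(farm.surplus())                             # → 10
print(farm.serve_outside_user(central_server_up=False) is not None)
```

The point of the reserve check is exactly the "retained sufficient for
their own needs" condition: a farm never starves itself to feed others.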

Perhaps 10% of users in my class run single 24/7 professional PC /
server machines with a known, regular "consumption", and have done
so for years.

Stats analysis on existing users could drive their assignment to
caches. Some work on the server side could perhaps automate this.

Surely a little database work and co-operation could distribute the
load around the seti farm queuing community on a more organized basis?

By that I mean load-sharing for the exchange of completion data and
new units, not just the crunching!
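The stats-analysis / assignment step might look something like this
simple greedy balancer: given each user's known daily consumption,
place them on the cache with the most free capacity. The function,
user names, and capacities are all hypothetical; real server-side
logic would no doubt be more involved.

```python
# Hypothetical sketch of assigning users to distributed caches by
# known daily consumption, using a greedy balance. All names and
# figures below are illustrative only.

def assign_users_to_caches(users, caches):
    """users: {name: daily_units}; caches: {name: capacity}.
    Returns {user: cache}, balancing load greedily."""
    load = {c: 0 for c in caches}
    assignment = {}
    # Place the heaviest consumers first for a better balance.
    for user, demand in sorted(users.items(), key=lambda u: -u[1]):
        # Pick the cache with the most remaining free capacity.
        cache = max(caches, key=lambda c: caches[c] - load[c])
        load[cache] += demand
        assignment[user] = cache
    return assignment

users = {"farm_a": 50, "box_b": 10, "box_c": 8}
caches = {"gregor": 60, "public1": 40}
print(assign_users_to_caches(users, caches))
# → {'farm_a': 'gregor', 'box_b': 'public1', 'box_c': 'public1'}
```

With steady, known consumption per user (as suggested above), a
one-off assignment like this could stay valid for a long time.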

Berkeley has encouraged this all along, well, ever since they had
some major outages. Someone actually stole the power cable at the
campus for the copper wire! They dug up the wire outside and stole
part of it!! Brave souls if you ask me; it was still energized at
the time!

The problem is that this currently only works for Classic units.
Boinc does have external caching of units "in the works", but it is
not ready for deployment yet. How far down the line it is, is for
someone else to say.
There used to be a dozen or so "public" caches; my guess is that
they will probably slowly come back on-line when the external
caching becomes a reality.