
[aprssig] Re: FindU Maps Quirky

Gerry Creager N5JXS gerry.creager at tamu.edu
Thu Feb 10 03:10:40 UTC 2005


Apologies.  Microsoft apparently decided the Beowulf name had enough 
recognition to snag a similar name.

Those of us who've been involved in embarrassingly parallel 
supercomputing have had our run-ins with Microsoft, where they tried to 
tell us they'd fix all our problems.

Since I decided I didn't need the blue-screens, and didn't need to 
reboot that frequently, I tended to ignore 'em.

Microsoft may (and that's a gamble) have figured out how to do load 
sharing and redundancy.  I'd not go so far as to append ", etc.," as 
they're NOT high-availability and are just barely, with .NET and XP, 
able to handle computational clustering.

Most of the web-based mapping runs on Linux.  FindU runs on Linux.  They 
are not likely to run cleanly... or reliably... on Windows.

I tend to budget a cluster upgrade every three years.  We can rarely get 
funding to upgrade more frequently, and for computational systems, 
mixing and matching is not a great idea.  That's the arena I live in.  I 
don't do, nor do I recommend, office systems in either context.  My 
desktop and my laptop are replaced when the warranty runs out.  Of 
course, my desktop probably spends more time doing geodetic data 
reduction or new numerical model runs than most systems around here...

With Findu2, and first.aprs.net, your 5-year model will probably 
suffice.  However, I'm also managing power conditioning and thermal 
conditioning on 'em.  We run periodic diagnostics and I try to fix them 
before they die.  For first.aprs.net, we're running SMART diagnostics. 
The older SCSI drives in FindU2 don't offer that benefit.  We simply 
test and fsck them periodically.
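A sketch of that sort of periodic checking, as a cron fragment.  This is 
purely illustrative and assumes smartmontools is installed; the device 
names and schedules are hypothetical, not the actual FindU2 or 
first.aprs.net setup:

```shell
# crontab fragment -- hypothetical devices/schedules; assumes smartmontools
# Weekly SMART overall-health check on the newer, SMART-capable drives
0 3 * * 0  /usr/sbin/smartctl -H /dev/sda
# Monthly read-only filesystem check on the older SCSI drives
# (-n answers "no" to all prompts, so nothing is modified)
0 4 1 * *  /usr/sbin/fsck -n /dev/sdb1
```

Note that a read-only fsck against a mounted filesystem can report 
spurious errors, so in practice you'd run it against an unmounted or 
quiesced volume.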

For what it's worth, the database aspects of James' operations are more 
intensive than displaying a shapefile that's preexisting.  That's what 
will drive server upgrades and replacements.

gerry


KC2MMi wrote:
> No, Gerry. MS was (iirc) using the term wolfpacks for clusters on Windows
> server systems that offer load sharing, redundancy, etc. I might have just
> overheard an informal or incorrect usage, I don't normally work with
> clusters.
> 
> Big corps might figure 2-3 years, but smaller ones often want five years.
> Given the pace of change "versus" the normal workload in offices, for users
> I sometimes recommend planning to replace 1/3 of the machines every year,
> and passing down the upgraded ones within the corp if needed, so someone
> always gets the most hp and the budget doesn't have to take a whole whack
> every 3rd year. But so much depends on what people are doing and what they
> need. Someone just using Office and "typing" could easily go 5 years, and of
> course there's an awful lot to be said for staying on a stable platform,
> mixing in the rare case someone has them.<G> But after 5 years, the technology
> has generally moved so far that an urgent case can be made for an upgrade,
> for reliability and other issues.
> 
> For a mapping server for APRS, barring any huge surge in APRS users after
> all those years, I think we could predict demand fairly well and expect a
> "sufficient" machine to live that long and keep doing whatever job it was
> doing when it was set up. Replace it faster? Dunno, unless the Boy Scouts
> require APRS badges, I can't see why there would be any change in the
> workload, or the maps, in that time frame. So, no pressing case to upgrade
> sooner. And I think, something to be said for the "donors" if we could say
> "this investment will pay you back for five years" or "your investment will
> carry us for five years" making a case for a larger donation than just a
> shot for one piece here or there, to be repeated every year with another
> annual plea. (PBS Telethon, anyone?<G>)
> 

-- 
Gerry Creager -- gerry.creager at tamu.edu
Texas Mesonet -- AATLT, Texas A&M University	
Cell: 979.229.5301 Office: 979.458.4020 FAX: 979.847.8578
Page: 979.228.0173
Office: 903A Eller Bldg, TAMU, College Station, TX 77843
