Tuesday, June 26, 2007

Group B Rallying Retrospective


Jalopnik waxes philosophical about "the good old days" (a.k.a. the mid-'80s) of Group B rallying, when the cars were essentially unlimited in power:

And we mean unlimited. While engine displacement was strictly categorized, Group B rules failed to specify any limit in terms of boost (insert maniacal cackling here). This proved to be a loophole engineers gleefully exploited with stupefying, almost dumbfounding results. Actual horsepower numbers are murky at best and even downright cryptic. Quoted numbers for the 2.1-liter Ford RS200, for example, range anywhere from 550 hp to over 800 hp. Reasons for this secrecy are many and varied. The most commonly cited are that the primitive all-wheel-drive dynamometers weren't up to the job, and that because there was no cap on power, manufacturers just didn't care all that much. We would wager, however, that teams didn't want the competition to know just how full-on berserk each other's cars were. But here's the skinny: Group B cars could out-accelerate F1 cars. 0-60 times of less than three seconds were common – on gravel. Sadly, in the days before computerized traction control, so much unwieldy power proved to be Group B's downfall.

Of course, like the Mille Miglia and the Targa Florio, Group B rallying was too dangerous to continue indefinitely.

And then everything went very wrong. Near Sintra in Portugal, driver Joaquim Santos came out of a gully only to find dozens of fans standing at the peak. His Ford RS200 careered into the crowd, killing three and injuring more than 30. Every team immediately pulled out of the race. Soon after, Lancia's Henri Toivonen inexplicably missed a tight left-hander and plunged into a ditch. The fuel tanks of his Delta S4 ruptured and burst into flames, incinerating him and his co-driver Sergio Cresto.
There is a selection of videos of the different Group B rally cars in action - my personal favorite is the Audi Quattro Sport.

Monday, June 25, 2007

Petascale Computing for Particle Physics

Google Tech Talks has a video presentation on the data collection systems being developed for the detectors at the new Large Hadron Collider particle accelerator. The ATLAS detector they are developing has unusually demanding computing requirements because of the sheer volume of data it produces (roughly a petabyte per second). Much of the processing involves very quickly rejecting most collision events (such as low-energy collisions) and then doing more detailed analysis on what remains; a toy sketch of that two-stage filtering follows the abstract below. The abstract:
The Large Hadron Collider (LHC), scheduled to begin operation in Summer 2008, will collide protons at energies not accessible since the time of the early Universe. The study of the reactions produced at the LHC has the potential to revolutionize our understanding of the most fundamental forces in nature. The ATLAS experiment, currently being installed at the LHC, is designed to detect collisions at the LHC, to collect the relevant data and to provide a unified framework for the reconstruction and analysis of these data. This talk will review the goals of the ATLAS program and will describe the software and computing challenges associated with analyzing these data. Among the relevant issues are the need to develop and maintain a unified analysis framework for use by more than 1000 scientists and the need for distributed access to large (petabyte) scale data samples, including a significant metadata component.
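
The trigger idea is simple even if the scale isn't: a cheap first-pass test throws away the overwhelming majority of events, and only the survivors get the expensive reconstruction. Here is a minimal sketch of that two-stage filtering in Python - the Event fields, energy threshold, and "analysis" are invented for illustration and are not ATLAS's actual trigger code:

# Toy two-stage trigger/filter pipeline. The Event fields, threshold and
# "analysis" below are made up for illustration; this is not ATLAS code.
from dataclasses import dataclass, field
import random

@dataclass
class Event:
    total_energy_gev: float                        # summed calorimeter energy (invented units)
    raw_hits: list = field(default_factory=list)   # hits to study only if the event survives

LEVEL1_ENERGY_CUT_GEV = 100.0   # cheap threshold: drop low-energy collisions immediately

def level1_accept(event):
    """Fast, cheap test applied to every event; most events fail here."""
    return event.total_energy_gev > LEVEL1_ENERGY_CUT_GEV

def full_analysis(event):
    """Stand-in for the expensive reconstruction done only on survivors."""
    return {"n_hits": len(event.raw_hits), "energy_gev": event.total_energy_gev}

def process_stream(events):
    results = []
    for event in events:
        if not level1_accept(event):            # reject the bulk of the data right away
            continue
        results.append(full_analysis(event))    # detailed work only on the remainder
    return results

if __name__ == "__main__":
    # Fake event stream: exponentially distributed energies, random hit counts.
    stream = [Event(random.expovariate(1 / 30.0), [0] * random.randint(1, 50))
              for _ in range(10_000)]
    kept = process_stream(stream)
    print(f"kept {len(kept)} of {len(stream)} events")

In the real detector the first cut happens in dedicated hardware at enormous rates; the sketch only shows the shape of the pipeline, not its scale.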

Wednesday, June 20, 2007

Crashing Las Vegas

Via Worse Than Failure...