Built with OpenSceneGraph, this display showed live searches from Pronto.com while the Pronto logo spun in 3D. The client app was written in C++ and connected to a server I wrote in Perl using POE, which parsed the live Apache logs and emitted each search on the site as it happened. Later, I added a globe that plotted each user's location by latitude/longitude using GeoIP, before that technology was readily available.
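The original server was Perl/POE, but the log-parsing idea is simple enough to sketch. Here's a minimal Python version, assuming (hypothetically) that searches appear in the Apache access log as GET requests with a `q=` query parameter — the real log format and parameter name may have differed:

```python
import re
from urllib.parse import urlparse, parse_qs

# Matches the request portion of an Apache combined-log line.
# The search parameter name ("q") is an assumption for illustration.
LOG_RE = re.compile(r'"(?:GET|POST) (?P<path>\S+) HTTP/[\d.]+"')

def extract_search(line):
    """Return the search terms from one access-log line, or None."""
    m = LOG_RE.search(line)
    if not m:
        return None
    qs = parse_qs(urlparse(m.group("path")).query)
    terms = qs.get("q")
    return terms[0] if terms else None

sample = ('1.2.3.4 - - [10/Oct/2000:13:55:36 -0700] '
          '"GET /search?q=red+shoes HTTP/1.0" 200 2326')
print(extract_search(sample))  # -> red shoes
```

The real server tailed the live log and pushed each extracted search to the C++ client; a sketch like this would sit inside that read loop.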
I took my wife and two young kids to Virginia for six weeks during a datacenter build-out. It went from an empty datacenter to 200 machines, all networked, VPN'd, load-balanced, and in working order. This is a picture of the datacenter after three weeks of build-out. I did most of the work myself, with a little cabling help from the NOC staff.
This was built on my live search display above, but they didn't want a live stream, so I collected 24 hours of data and sent it at midnight. This is a picture of the R&D version of the display from when I went to NYC to meet the guy who wrote the software:
This was the first cluster I built, to prove the concept of an HPC Linux cluster for doing high-resolution weather forecasts that would eventually replace our aging SGI hardware. This was in 2000, before Linux clusters were mainstream.
Many years ago, when I was a C/C++ programmer, I used a really nice debugger called UPS. It was originally written for Solaris but was later ported to Linux. It fell by the wayside over the years, and I recently rediscovered it. The original maintainer is no longer keeping it up to date, but another developer, Tom Hughes, is still writing patches for it and keeping it working on Red Hat-based systems. I'm providing builds below based on his patches.
The debugger’s webpage: http://ups.sourceforge.net/
It even had a song written about it (http://ups.sourceforge.net/main.html#song ... yes, a song about a debugger), but the song has been lost to the ages, I'm afraid.
The debugger should work on most modern Red Hat-based OSes that use DWARF2 debugging symbols.
NOTE: The debugger doesn’t work under Ubuntu due to glibc differences. Sorry.
Here are my RPMs for Fedora 13; they should install cleanly on CentOS and RedHat as well.
These Greasemonkey plugins remember the last result you clicked after doing a Google search. If you come back to a similar search for the same terms days later, the page you left off at will appear at the top of the results.
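The plugins themselves are JavaScript userscripts, but the core mechanism can be sketched in a few lines of Python: key each search by its normalized terms, remember the last result clicked, and look it up when a similar search comes in. The dict here stands in for Greasemonkey's persistent storage, and the normalization scheme is my guess at one reasonable approach, not necessarily the plugin's:

```python
def normalize(query):
    """Order- and case-insensitive key for a set of search terms."""
    return " ".join(sorted(query.lower().split()))

store = {}  # stand-in for the userscript's persistent storage

def remember_click(query, url):
    """Record the last result clicked for this search."""
    store[normalize(query)] = url

def last_visited(query):
    """Return the remembered result for a similar search, if any."""
    return store.get(normalize(query))

remember_click("ups debugger linux", "http://ups.sourceforge.net/")
# Same terms in a different order still hit the remembered result:
print(last_visited("linux UPS debugger"))  # -> http://ups.sourceforge.net/
```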
I’ve got a Roland JD-990 synthesizer and Reason 6.0 that I’m experimenting with. If you’d like to hear my random creations and even participate/contribute, here are the links to the repositories:
If you own a JD-990 and you use Rosegarden under Linux, you’ll want this file:
I’ve written a bunch of web crawlers.
One, written in Perl, focuses on a single domain and is respectful of both robots.txt and bandwidth.
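The original is Perl, but the polite-crawling check at its heart translates directly. A minimal Python sketch, using the standard library's robots.txt parser — here the robots.txt content is hypothetical and parsed inline, where a real crawler would fetch it and would sleep between page downloads:

```python
import urllib.robotparser
from urllib.parse import urlparse

class PoliteCrawler:
    """Gatekeeper for a single-domain, robots.txt-respecting crawl."""

    def __init__(self, domain, robots_lines, delay=2.0):
        self.domain = domain
        self.delay = delay  # seconds the fetch loop sleeps between requests
        self.robots = urllib.robotparser.RobotFileParser()
        self.robots.parse(robots_lines)  # real code: fetch /robots.txt

    def allowed(self, url):
        """Stay on one domain and respect robots.txt."""
        return (urlparse(url).netloc == self.domain
                and self.robots.can_fetch("*", url))

crawler = PoliteCrawler("example.com",
                        ["User-agent: *", "Disallow: /private/"])
print(crawler.allowed("http://example.com/page"))       # -> True
print(crawler.allowed("http://example.com/private/x"))  # -> False
print(crawler.allowed("http://other.com/page"))         # -> False
```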
One is a hybrid that uses a combination of Perl and C code to crawl any domain: the C crawler pulls pages, while Perl sorts through domains and URLs and organizes the seed lists for the crawler.
One is a quick utility that is given a URL and pulls N 'interesting' words from its content. I use it for projects that want to display on-demand, context-sensitive content on a page, much like Google's context-sensitive ad network: they need to crawl your website to find out what words appear on a page so they can place an appropriate ad next to the content. Same idea here.
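One plausible way to pull "interesting" words is to strip the markup, drop common stop words, and rank what's left by frequency. Here's a Python sketch of that approach — the stop-word list and the tag-stripping regex are deliberate simplifications, and the real utility fetches the URL first:

```python
import re
from collections import Counter

# Tiny illustrative stop-word list; a real one would be much longer.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "on",
              "is", "it"}

def interesting_words(html, n=5):
    """Return the n most frequent non-stop-words in a page's text."""
    text = re.sub(r"<[^>]+>", " ", html)       # crude tag stripping
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [w for w, _ in counts.most_common(n)]

page = ("<html><body><p>The weather cluster runs weather forecasts "
        "on the Linux cluster.</p></body></html>")
print(interesting_words(page, 2))  # -> ['weather', 'cluster']
```

With the word list in hand, an ad or content system can match the page's vocabulary against its inventory.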
NOTE: Other miscellaneous web-crawling utilities: C programs to speed up the regex operations that pull domains from URL lists, and a utility to convert a list of URLs from relative to absolute. Some day I'll release most or all of this code.
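Those two helper operations — pulling the domain out of each URL, and resolving relative URLs against a base page — are C programs in my toolkit for speed, but the logic itself is small. A Python sketch of both, using standard-library URL handling:

```python
from urllib.parse import urljoin, urlparse

def absolutize(base, urls):
    """Resolve each (possibly relative) URL against the base page."""
    return [urljoin(base, u) for u in urls]

def domains(urls):
    """Pull the domain (host) out of each absolute URL."""
    return [urlparse(u).netloc for u in urls]

links = ["/about", "img/logo.png", "http://other.com/page"]
print(absolutize("http://example.com/dir/index.html", links))
# -> ['http://example.com/about',
#     'http://example.com/dir/img/logo.png',
#     'http://other.com/page']
print(domains(["http://example.com/a", "https://other.com/b"]))
# -> ['example.com', 'other.com']
```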
SWFind.com (closed now, but the tools and API are still around)