Portalpotty began in 1999 as yet another bandwidth-saving device. I was sharing a 56k dialup connection, so anything that meant fewer bits going across the wire was generally a good thing. (Which led to a rather weird squid/junkbuster setup, but that's another story.)

The Ancient:
The original portalpotty was just known as "the homebrew portal" and consisted of two parts. The first was the portalget script, which used wget to grab a bunch of rdf files from various news sites that I read. The second was the rss script, which used the XML::RSS perl module to parse out the headlines and put together a simple html page. portalget was run from a cron job, while rss was called through apache's cgi interface. There was good and bad about this setup. The good: bandwidth savings. The bad: the page was regenerated on every load, new content or not. The code was also a mess, so maintaining it was next to impossible.
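The division of labor can be sketched roughly like this. The originals were wget plus a perl script built on XML::RSS; this is a minimal Python illustration only, and the crude regex "parser" is a stand-in for the real module, not a reconstruction of the actual code:

```python
import re

def extract_headlines(rdf_text):
    """Pull <title> contents out of a cached rdf file. Crude regex parse
    for illustration; the real script used the XML::RSS perl module."""
    titles = re.findall(r"<title>(.*?)</title>", rdf_text, re.S)
    return titles[1:]  # the first <title> is the channel's own name, skip it

def render_page(feeds):
    """feeds maps a feed name to its list of headlines; returns the
    simple html page that gets written out as the portal."""
    parts = ["<html><body>"]
    for name, headlines in feeds.items():
        parts.append(f"<h2>{name}</h2>")
        parts.append("<ul>")
        parts.extend(f"<li>{h}</li>" for h in headlines)
        parts.append("</ul>")
    parts.append("</body></html>")
    return "\n".join(parts)
```

The cron job plays the role of portalget (fetch and cache), and a second script plays the role of rss (parse the cache, emit html) — the key point being that fetching and rendering are separate steps.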

The Old:
When I moved the portal off my home machine and onto a machine with a full-fledged connection, one of the first things I did was re-engineer it so that it was no longer a cgi app, but just generated a static page on a regular basis. I also added the silly "portalpotty" name. The good: the machine couldn't be flogged by repeated reloads anymore. The bad: if an rdf file wasn't available, it would make a mess of the resulting html, and the code was still unmaintainable.

The New:
Eventually, I just got tired of massaging the thing into working, so I rewrote the rss script into one known as portalpotty. Instead of embedding bits of html in the script itself, I used the perl module HTML::Template to offload most of the html rendering. I made up a fresh template file with a layout similar to the old version's, and rewrote the frontend to deal with the new mechanism. I also added some sanity checks to the portalget script, so if it isn't able to get a fresh rdf, it has a fallback. (This still needs work. Minor bug.) The rdf files are no longer hard-coded within the portalpotty script, but read from a file called "rdf_list", so I can now change a source of content by making a small change in this file and pointing portalget at a new location. The good: much easier to maintain. I could even write a whole new layout in a very short period of time. The bad: a few minor bugs hanging around, although they're getting squashed as I find them. One annoying bit is that HTML::Template isn't very cooperative when it comes to non-standard ascii characters, so I keep having to add filters to the script to replace those when I see them. The biggest problem was the nonstandard apostrophe that Microsoft loves using. Jerks.
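The rdf_list mechanism amounts to a tiny parser over a one-feed-per-line file. Here's a hedged Python sketch of the idea; the actual file's layout isn't documented here, so the "name url" format, comment handling, and function name are all assumptions:

```python
def parse_rdf_list(text):
    """Parse an rdf_list-style file: one feed per line as "name url",
    with blank lines and #-comments ignored. (The real file's exact
    layout is an assumption here, not the documented format.)"""
    feeds = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, url = line.split(None, 1)
        feeds.append((name, url))
    return feeds
```

Swapping a content source then means editing one line of this file instead of touching the script, which is the whole point of pulling the feed list out of the code.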

The Future:
Obviously, any bugs that surface will need to be knocked down. I don't think this one is going to be subject to feature creep, as I don't have much interest in developing it beyond where it is now. Theoretically, you could add some user-specific stuff, but that's a ball of wax I just don't care to pursue right now.

The Source:
The old
The new
The even newer.

Update 3/13/2002:
Things keep breaking due to people graciously leaving html in their RSS, or using extended ascii characters that XML::RSS barfs on. So I've modified the script to strip out most high-ordinal ascii values, and I used lex to write a quick program that strips out all HTML tags.
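The two filters amount to something like the following. The real versions were a perl substitution and a small lex program; this Python sketch just shows the idea, and the regex tag-stripper is cruder than a proper lexer:

```python
import re

def strip_high_ascii(text):
    """Map the Microsoft "smart" apostrophe to a plain one, then drop
    anything else above 0x7f so the template stage never sees it."""
    text = text.replace("\u2019", "'").replace("\x92", "'")
    return "".join(ch for ch in text if ord(ch) < 128)

def strip_tags(text):
    """Remove html tags left in feed content. The real version was a
    lex program; a regex is good enough for a sketch, though it won't
    handle every pathological case a real lexer would."""
    return re.sub(r"<[^>]*>", "", text)
```

Both run over the headline text before it's handed to the template, so the stray markup and extended characters never reach the output page.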
Update 10/08/2002:
A while back I added some more sanity checking to the script, so if a feed barfs now, the script just inserts a picture of the swedish chef and a message about the feed being borked. It's been quite a bit more stable since, and the rest of the feeds are rendered happily instead of giving munged output.
Bork! Bork! Bork!
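The fallback logic is just a guard around the per-feed parse. A hypothetical Python sketch (the function names and the chef.gif filename are made up for illustration):

```python
def safe_parse(name, rdf_text, parse):
    """Run the per-feed parse; on any failure, substitute a placeholder
    entry (the real page shows the swedish chef) so one bad feed can't
    munge the whole page. Names and chef.gif are illustrative only."""
    try:
        return parse(rdf_text)
    except Exception:
        return [f'<img src="chef.gif" alt="Bork! Bork! Bork!"> The {name} feed is borked.']
```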

Update 07/17/2004:
I took down portalpotty. I was the only one left reading it, and it had outlived its usefulness for me.

Update 07/28/2005:
Bringing up content again for the website. I won't rule out future versions of portalpotty, but I've been too busy playing videogames to care much. The fact that there are plenty of standalone RSS readers *almost* makes me want to ask "but couldn't I just use one of them?", but somehow aggregating web content in a program external to my web browser seems boneheaded to me.