Security is a Full-Time Job

Ignorance must have been bliss in 1969. Around that time, the continental network that became the Internet was just forming. (RFC 1 was published by Steve Crocker on April 7th.) Life for a system administrator back then was a lot simpler – there was implicit trust between a computer owner and the people who could access it via the network. There were no hackers, because the only people who could use the network were the ones building it.

Fast forward to 2004. The same architecture that simplified the deployment of the Internet now hinders it. A script kiddie can bring down any web site by forging packets – including big sites you may have heard of: Yahoo!, Amazon.com, CNN, eBay, and others. (All of these are among the 15 largest English-speaking sites, according to Alexa.)

Recently, software companies seem to have been selling the idea that keeping up to date on security patches will keep you safe. While it certainly helps, this isn’t going to make your system foolproof. Security problems can stem from all sorts of other sources, including human error.

Ultimately, security is a full-time job. You can write scripts to install patches, run programs to check your traffic for unwanted activity, and scan for viruses and worms. However, if you’re not spending at least 40 hours a week reading security bulletins, testing software, and educating others, you will miss something.

Running a personal web server looks like an attractive option for many people, but when you’re figuring the cost of piggybacking on a DSL or cable connection, don’t forget to factor in who’s going to be handling your security.

If that someone is you, you might want to start reading.

Semi-annual Time Gripes

I think I’m going to make these semi-annual gripes about daylight saving time a Waileia tradition. I forgot to set Movable Type back to Nome time to compensate for the rest of the United States (except for those smart Hoosiers in Indiana and a few places in Arizona) jumping an hour into the future. Thanks for the reminder, Dean.

Elsewhere:

  • Slashdot had a poll on the change, with the haters far outnumbering the lovers. One of the more interesting threads to emerge talks about the potential impact on cron jobs (there’s a quick sketch of that after this list).
  • Two NBA players were benched for not making a morning practice due to the daylight saving time change.
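
I won’t pretend to know what every post in that thread says, but the gist of the problem is easy to sketch: a job scheduled at a fixed local time can be skipped (or run twice, in the fall) because the local clock jumps. The date below is the actual 2004 spring-forward night; the timezone and the 2:30 a.m. job are just illustrative assumptions.

```python
# Sketch: why a fixed-local-time job misfires across a DST change.
# Walk forward in real (UTC) time across the 2:00 a.m. US transition on
# April 4, 2004 and print the local wall clock. The 2 o'clock hour never
# appears, so a naive cron entry like "30 2 * * *" has nothing to fire on;
# in the fall, the 1 o'clock hour appears twice and the job can run twice.
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

eastern = ZoneInfo("America/New_York")  # illustrative; Hawaii never changes
utc_start = datetime(2004, 4, 4, 6, 0, tzinfo=timezone.utc)  # 1:00 a.m. EST

for minutes in range(0, 121, 30):
    instant = utc_start + timedelta(minutes=minutes)
    print(instant.astimezone(eastern).strftime("%H:%M %Z"))
# Prints: 01:00 EST, 01:30 EST, 03:00 EDT, 03:30 EDT, 04:00 EDT
```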

RFC: Choosing a Church

In the Internet world, an RFC traditionally specifies technical information like “protocols, procedures, programs, and concepts.” In my case, I have a specific non-technical question I’d appreciate answers to.

For about two years, I’ve been one of the unchurched. (Irene, it’s great that you’re looking for a new home so quickly!) I don’t want to say where, or for what reasons I left.

God has blessed me repeatedly with many talents, but extroversion isn’t one of them. There are lots of seemingly good ideas on choosing a church, but I don’t know how to narrow the field down.

My question is this: How did you find the church you’re in now? I’m hoping to get a lot of comments that will help me (and possibly even a visitor) get plugged in to the body of Christ again.

At the moment, I only have one requirement. I believe I’m supposed to be using my gifts of understanding technology and multimedia for ministry. Wherever I go, I think this will need to be a ministry I can join.

Like Irene, I’m asking for your prayers, and your advice. Mahalo.

RSS Hits the Mainstream

As I mentioned a few months ago, I thought that the first portal to aggregate feeds would be on strong footing indeed. Surprise! Somebody at Yahoo! heard me.

In connection with their revamped search engine, Yahoo! has been making a lot of changes behind the scenes. One of them is a new beta My Yahoo! module, RSS Headlines. The new module lets you load up to 25 feeds per page in a format that looks a lot like the news module.

I’m a little disappointed that they didn’t adopt Netscape’s old model of one module = one site, but this is really a step ahead of the competition. In fact, it enticed me to set Firefox’s home page to My Yahoo!, something I never bothered to do before.

Of course, the module is still in beta, so you have to jump through some hoops to get to it. First, it’s not listed in My Yahoo! by default; you’ll need to use this link to add it. Second, it can be buggy from time to time. In fact, Slashdot banned it at one point for hitting their servers more than 200 times in an hour. Ouch. For the most part, though, I’ve found it very stable and useful.

You can read more about the Yahoo! launch at Jeremy’s blog, or check out some interesting essays about the Portal, Blog, and RSS ecosystem here.

And, for good measure, you can add Waileia to My Yahoo! here: Add to My Yahoo!

Understanding the Deep Web

An interesting Salon article describes Yahoo’s new Content Acquisition Program, which offers paid inclusion for deep-searching online databases. These treasure troves of information are often missed by search engines, which travel the links between dynamic pages cautiously.

Yahoo! has the right idea – search engines today aren’t capturing the best the Web has to offer, because these articles are often behind query or login pages. Yahoo’s solution seems to be to extend their search engine to understand the URLs of specific sites. However, many people are upset that this new program (which is basically a combination of premium offerings from their other properties) doesn’t clearly mark the “paid inclusion” links in their main index. Some people point out that paid inclusion is a conflict of interest for search engines. (One Yahoo! employee disputes this on his personal blog.)
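
I don’t know the details of how Yahoo! actually does this, so here’s only a guess at what “understanding the URLs of specific sites” might look like in practice: the site describes which query parameters its database front end accepts, and the crawler expands them into fetchable URLs. Everything in this sketch (the site, the parameters, the values) is made up.

```python
# Hypothetical sketch of per-site URL "understanding": a deep-web source
# declares the query parameters its search form accepts, and a crawler
# expands every combination into a crawlable URL. All names are invented.
from itertools import product
from urllib.parse import urlencode

source = {
    "endpoint": "http://catalog.example.org/search",
    "params": {
        "topic": ["hawaii", "volcanoes", "surfing"],
        "format": ["html"],
    },
}

def enumerate_urls(source):
    """Yield one URL for every combination of declared parameter values."""
    names = list(source["params"])
    for values in product(*(source["params"][n] for n in names)):
        yield source["endpoint"] + "?" + urlencode(dict(zip(names, values)))

for url in enumerate_urls(source):
    print(url)
# http://catalog.example.org/search?topic=hawaii&format=html
# http://catalog.example.org/search?topic=volcanoes&format=html
# http://catalog.example.org/search?topic=surfing&format=html
```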

Ultimately, I think the solution to the problem of searching the deep web will be based on XML. Perhaps what we need is a way of defining the APIs that databases expose. A language like WSDL is a good start, but WSDL doesn’t do a good job of capturing the semantics behind a web service call. What we need is a way to map the fields in a database to a common interface – something like what DBI and its DBD drivers do.
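
To make the DBI analogy a little more concrete, here’s a toy sketch of the kind of arrangement I have in mind: one common interface that a search engine programs against, plus a thin “driver” per database that maps its native field names onto a shared record shape. Every class and field name here is invented for the example; nothing like it exists today as far as I know.

```python
# Toy sketch of a DBI/DBD-style arrangement for deep-web sources: a common
# interface plus one small driver per database. All names are invented.
from abc import ABC, abstractmethod

class DeepWebDriver(ABC):
    """The common interface a search engine would program against."""

    @abstractmethod
    def search(self, keywords: str) -> list[dict]:
        """Return records shaped as {'title': ..., 'url': ..., 'summary': ...}."""

class LibraryCatalogDriver(DeepWebDriver):
    """Driver for a hypothetical catalog whose back end uses its own field names."""

    def search(self, keywords: str) -> list[dict]:
        raw = self._native_query(keywords)  # the site-specific part
        # Map the catalog's vocabulary onto the shared schema.
        return [
            {"title": r["item_name"], "url": r["permalink"], "summary": r["abstract"]}
            for r in raw
        ]

    def _native_query(self, keywords: str) -> list[dict]:
        # Stand-in for the real back-end query.
        return [{
            "item_name": "Results for " + keywords,
            "permalink": "http://catalog.example.org/item/1",
            "abstract": "A stub summary.",
        }]

# The engine's side never changes, whichever database it is crawling:
for record in LibraryCatalogDriver().search("deep web"):
    print(record["title"], record["url"])
```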

We may also want to consider ways of telling spiders a little more about the sites we run. robots.txt is great, but an expanded language could give advanced webmasters the ability to flag infinite loops, point out different presentations of the same content, specify preferred crawl schedules, and more, allowing smart robots to find even more information at a site and categorize it intelligently.
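
Purely as a back-of-the-napkin sketch, here’s what generating such a description might look like. None of these element or attribute names exist anywhere that I know of; only the ideas (crawler traps, duplicate views of the same content, crawl schedules) come from the paragraph above.

```python
# Sketch of an expanded, XML-based robots description. Every element and
# attribute name is invented for illustration.
import xml.etree.ElementTree as ET

site = ET.Element("site-description", {"base": "http://www.example.org/"})

# Tell robots where the infinite loops are (e.g., a calendar that pages forever).
ET.SubElement(site, "crawler-trap", {"path": "/calendar/"})

# Declare that two URL patterns are the same content presented differently.
dup = ET.SubElement(site, "duplicate-view")
ET.SubElement(dup, "canonical").text = "/articles/{id}"
ET.SubElement(dup, "alternate").text = "/print/{id}"

# Suggest how often each part of the site is worth revisiting.
ET.SubElement(site, "crawl-schedule", {"path": "/news/", "frequency": "daily"})

ET.indent(site)  # Python 3.9+; pretty-print for readability
print(ET.tostring(site, encoding="unicode"))
```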

(Original link courtesy Slashdot.)