Reset Links

Just a small request, if you have any links here on your blogroll or link list with the www. prefix, if you could change that to be just photomatt.net I would really appreciate it. I’m not sure how Google is going to handle the 301 Redirect Permanent headers so any help you can give will certainly smooth things over. Old permalinks and such, being permanent, will of course always work. Previously the “www” prefix was simply an alias that worked identically to just the hostname. I’ve come to consider this redundant; yes this page is on the “World Wide Web.” Actually in terms of user experience this change should be completely transparent, except I think I’m going to lose a lot of Google juice because of it—photomatt.net has a PageRank of 4; www.photomatt.net had a PageRank of 6 and was reindexed fairly frequently.
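As a rough sketch of the behavior in question (illustrative Python with a made-up path—the real site presumably does this in the web server configuration, not application code), the canonicalization boils down to:

```python
def canonical_redirect(host, path):
    """If a request arrives on the www.-prefixed host, answer with a
    301 Moved Permanently pointing at the bare hostname; otherwise
    serve normally. A sketch of the behavior, not the actual setup."""
    if host.lower().startswith("www."):
        return 301, {"Location": "http://" + host[4:] + path}
    return 200, {}

# A request to www.photomatt.net for any path gets sent, permanently,
# to the same path on photomatt.net; old permalinks keep working
# through the redirect.
```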

5 thoughts on “Reset Links”

  1. I take issue with you considering the www prefix to be redundant. (En garde! ;> ) I think it’s the other way around; http://photomatt.net (in principle, not technically) is a convenience for http://www.photomatt.net . It’s the www that differentiates whether I’m talking to the photomatt.net web server, as opposed to the photomatt.net FTP server, as opposed to the photomatt.net SMTP server, as opposed to the photomatt.net POP3 server, as opposed…. Sure, these can be condensed – SMTP, IMAP, and POP3 may all be considered part of mail.photomatt.net, differentiated by port – but they are fundamentally different.

    I also don’t think you can differentiate on protocol alone. Yes, you can argue that http://photomatt.net makes it a web page, as opposed to ftp://photomatt.net. But what is to separate web pages (ultimately HTML, even if they started life as PHP, ASP, JSP, CFM…) from, say, web services? The MIME type in the HTTP response? Useful, but it’s asking a lot of me to inspect that to try and discern your server’s purpose.

    So, I submit to you (because I don’t want to go back to reading functional specs) that the www is not at all redundant.

    Thoughts?
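The port-versus-prefix distinction in comment 1 can be made concrete. The assignments below are the standard IANA well-known ports; the Python is purely illustrative:

```python
# One hostname can front many services; what actually tells them apart
# on the wire is the well-known TCP port each daemon listens on,
# not a prefix like www. or ftp. in the name.
WELL_KNOWN_PORTS = {"ftp": 21, "smtp": 25, "http": 80, "pop3": 110, "imap": 143}

def port_for(service):
    """Conventional TCP port for a service name."""
    return WELL_KNOWN_PORTS[service]
```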

  2. Are you talking to photomatt the web server? There is no spoon. There’s a level of abstraction between the domain name and where the request actually goes, just like there’s a level of abstraction between a URL on a site and where that file (if it exists) actually resides. You hit the nail on the head with the ports. I happen to know that you’re on a cPanel server, which tells me a few things about your configuration. Even though your host may have told you that you should check your mail at mail. and FTP to ftp. and such, to the server it’s all the same. You can check your mail at ftp. and FTP to mail. if you want. Most modern servers are set up like this; hosts just tell people otherwise because for some reason people like it better. It’s all services on the same machine (in my case at least).

    I used to be crazy about sub-domains but now if it’s all the same site I leave sub-domains alone, because they complicate everything—logs, paths, cookies, et cetera. I’m not sure exactly what you’re getting at with the web services. Why do web service requests need to be handled separately from web pages? Generally (in my experience) you’re just dealing with text/xml, and the client usually doesn’t care what it’s being served with. On your site MT responds to trackbacks (a simple web service) with a content type of text/xml and then delivers an XML document. Anyway, I started writing this about an hour ago and I’ve been distracted a thousand times since then, so if this has any degree of coherence by the time you read it, I’d be interested to hear you expand a little further.
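The alias point in comment 2 can be sketched in a few lines (a toy Python resolver using a placeholder address from the documentation range—no real DNS involved):

```python
# A toy "zone" as a dict: every service-named host is just an alias
# for the same machine, exactly as described above. The IP address is
# a hypothetical placeholder, not a real server.
ZONE = {
    "example.com":      "203.0.113.7",
    "www.example.com":  "203.0.113.7",
    "mail.example.com": "203.0.113.7",
    "ftp.example.com":  "203.0.113.7",
}

def resolve(name):
    """Toy resolver: hostname -> address."""
    return ZONE[name]

# Every name lands on the same box; which daemon answers depends on
# the port you connect to, not on the hostname you happened to look up.
```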

  3. As long as I’m still goofing off….

    Enh, point. I’ll give you the point for a layer of abstraction (and a bonus point for the Matrix reference). I can argue the other way around that my home domain, muppetlabs.homelinux.com, is three machines, and (if I had bothered to register subdomains) they could point at different machines. Legacy convention to prefix with the service? Sure.

    But that’s not quite addressing your original point. You claimed that http://www.photomatt.net is redundant, since the resulting page obviously is on the WWW. (By similar logic, ftp://ftp.photomatt.net is redundant, as is telnet://telnet.photomatt.net.)

    I argue that it’s not redundant. There is nothing inherent in http://photomatt.net that tells me I’m going to get a web page back. (I’m narrowing “web content” down to a web page, yes.) So much can travel by HTTP these days that it’s not an indicator of a web page.

    This is where technology and standards hit a bit of convention. Sure, a request to http://photomatt.net could come back with text/html content, and technically, my web browser doesn’t care. It got content back. But I care. I want to see a web page, and not the XML results from a malformed request to a SOAP application designed to pull historical weather readings for Topeka, KS during the Great Depression. (Or something.) Both arrive as content over HTTP, but they’re vastly different in purpose.

    That’s why, to me, it’s not redundant. Not so much for a technical reason, but for a user experience one. (Speaking of, it’s bloody difficult to write an essay like this in a four line comment box. Can we get the expanding form fields back?)

  4. The comment box now expands per your request.

    Well, I can safely say that every request coming to port 80 is handled by Apache on my end. I think my main point of contention is that your browser cares very much about what sort of MIME type is sent when you make a request to any part of this site. If I had set up Apache to use, say, an MP3 file as the index for my root directory, then hopefully when your browser sends a request to http://photomatt.net it will recognize the headers and start playing or downloading that file per your preferences, as opposed to spitting it out as text in your window. It’s the same thing if you get a text/xml page or an application/xhtml+xml page: your browser chooses what to do with it based on the headers. In fact it even sends headers saying which types of content it accepts and prefers, thereby making content negotiation possible.

    There really doesn’t have to be anything inherent in any URI that indicates what sort of content is going to be served. On this site, for example, you will be very hard pressed to find extensions on anything but images. I talk about PHP a lot, but I could really be a Microsoft plant running this whole operation with ASP.NET. The world may never know. Going further, although there are thousands of “pages” on this site, it’s really just a handful of scripts running it all; a page doesn’t “exist” unless it’s requested. If a tree falls on Apache, but no one requests anything, does it make a sound?

    Okay, ignore that entire last paragraph because I was rambling. I think my point was this: over HTTP, nothing in the URI has to indicate what sort of resource you’ll be receiving; only once the request is sent and the response headers come back can your UA ascertain the content and respond appropriately.
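Both halves of the argument in comment 4—the browser advertising what it accepts, and then acting on the Content-Type that comes back—can be sketched in simplified form (toy Python; real HTTP content negotiation is considerably more involved than this):

```python
def negotiate(accept_header, available):
    """Pick the best match from an Accept header. Toy version:
    exact media types plus */*, honoring q-values only."""
    prefs = []
    for part in accept_header.split(","):
        fields = [f.strip() for f in part.split(";")]
        q = 1.0
        for f in fields[1:]:
            if f.startswith("q="):
                q = float(f[2:])
        prefs.append((q, fields[0]))
    for q, mtype in sorted(prefs, reverse=True):
        if q <= 0:
            continue
        if mtype in available:
            return mtype
        if mtype == "*/*":
            return available[0]
    return None

def dispatch(content_type):
    """What a UA does is decided by the response's Content-Type
    header, never by the shape of the URI it requested."""
    ctype = content_type.split(";")[0].strip()
    if ctype in ("text/html", "application/xhtml+xml"):
        return "render"
    if ctype.startswith("audio/"):
        return "play"      # the MP3-as-index thought experiment above
    if ctype in ("text/xml", "application/xml"):
        return "parse"
    return "download"
```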

  5. Your point is noted, and I can’t say you’re wrong.

    However, it’s not that my browser cares about the MIME type that comes back from a request. It doesn’t. It knows how to handle all sorts of things coming back. I care about the content that comes back. I, the human, want to see a web page (conventionally held to be the expected content) when I go to a web site.

    Essentially, you’re technically very correct. I, as a human user, however, am confused. (Well, not really, but for purposes of illustration. You understand. 🙂 )
