Wouldn’t it be neat if, instead of requiring a crawler bot to visit the page, the AdSense JavaScript could actually scrape the page text that the user sees (much like a screen reader such as JAWS does) and use that to formulate the ad in real-time? That would prevent people from cloaking pages for the AdSense bot and would allow it to be used on pages where you must be logged in to view the content. I know it’s probably impossible, but it’d still be neat.
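(Just to illustrate the scraping half, which is actually the easy part: a minimal sketch using standard DOM calls; `extractVisibleText` is a made-up name.)

```js
// Hypothetical sketch: grab the text a user (or a screen reader
// like JAWS) would actually encounter on the rendered page.
function extractVisibleText() {
  // innerText, unlike textContent, skips <script>/<style> blocks and
  // elements hidden with CSS, so it approximates "what the user sees".
  return document.body.innerText;
}
```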
It would be a damn smart piece of JavaScript, but man, that would require way too many resources.
User arrives » JS runs [reads the page » asks Google » receives the answer and compiles it] » ad appears. That might only add a couple of ms to your loading time, but Google could feel it as a DDoS, since there are many sites with AdSense, and many of those get numerous hits (say, dozens) a second, so that’s not nice.
Yes, but Google does know a thing or two about caching. Multiple hits/sec from a particular site wouldn’t be a problem with smart caching.
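(A toy sketch of what “smart caching” could mean on Google’s end; `lookupAds` and the whole shape of this are made up for illustration.)

```js
// Hypothetical server-side sketch: cache ad results by keyword set,
// so dozens of hits/sec for the same page cost one real lookup.
const adCache = new Map();

// Stand-in for the expensive ad-index query.
function lookupAds(keywords) {
  return `ads for: ${keywords.join(", ")}`;
}

function serveAd(keywords) {
  // Sort so ["foo","bar"] and ["bar","foo"] hit the same entry.
  const key = [...keywords].sort().join(",");
  if (!adCache.has(key)) {
    adCache.set(key, lookupAds(keywords)); // done once per keyword set
  }
  return adCache.get(key);
}
```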
The problem is that it probably would add more than just a couple of ms to the load time, and depending on the end-user’s bandwidth and connection reliability, it could get painfully slow.
Still, though, it’s certainly a nifty idea.
I have to say that this would definitely be a potential privacy invasion. Without the user’s request or even knowledge, they would be unwittingly transmitting their own potentially sensitive information. Add to this the increased bandwidth use on the user’s end (they just downloaded a site, and now they have to upload it!?!) and I could see this being a consumer nightmare. 😉
I wish someone would create a standard or API for advanced spider control and communication. Something like an internet doggy-door.
Really interesting idea though!
I think this would be neat actually.
First off, using XMLHttpRequest is *not* that hard on your connection.
I mean:
1) the JS would scan for repeating words, em/strong words, patterns, etc.,
2) compile a list of keywords and send that to Google, maybe along with a few text snippets (see the sketch after this list),
3) Google hops on and compiles the results – and as we all know, that’s bloody fast – and returns them,
4) ads are displayed.
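(Here’s a rough, hypothetical sketch of steps 1 and 2; the frequency-plus-emphasis scoring is just one plausible heuristic, not how AdSense actually works.)

```js
// Hypothetical sketch of steps 1-2: score words by frequency,
// give em/strong text extra weight, and keep the top few.
function topKeywords(max) {
  const counts = {};
  const tally = (text, weight) => {
    // Only consider words of 4+ letters to skip most filler words.
    for (const w of (text.toLowerCase().match(/[a-z]{4,}/g) || [])) {
      counts[w] = (counts[w] || 0) + weight;
    }
  };
  tally(document.body.innerText, 1);
  for (const el of document.querySelectorAll("em, strong")) {
    tally(el.innerText, 3); // emphasized words count extra
  }
  return Object.entries(counts)
    .sort((a, b) => b[1] - a[1])
    .slice(0, max)
    .map(([word]) => word);
}
```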
Note also that during this process the rest of the page is just chugging along merrily – la la la – because XMLHttpRequest can run in the background, after the page loads.
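(And a hypothetical sketch of that background step, reusing the `topKeywords` sketch above; the endpoint URL and the `ad-slot` div are placeholders.)

```js
// Hypothetical sketch of steps 3-4: once the page has loaded, send
// the keywords off and drop the reply into a placeholder <div>,
// without ever blocking the rest of the page.
window.addEventListener("load", () => {
  const xhr = new XMLHttpRequest();
  xhr.open("POST", "https://ads.example.com/match"); // placeholder URL
  xhr.onload = () => {
    document.getElementById("ad-slot").innerHTML = xhr.responseText;
  };
  xhr.send(JSON.stringify({ keywords: topKeywords(10) }));
});
```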
There are two major issues I can see with this:
1) downloading the JS for the first time, because it would probably be fairly large. Something complex (intelligent?) enough to scan for keywords in JS would be hefty code, I imagine.
2) processing time on the client’s computer could be nightmarish. Unless run in a very nice way (see the sketch below), scanning through an entire page could eat a computer alive 😀
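(On issue 2, “run in a very nice way” would mean something like this hypothetical sketch: scan in small chunks and yield to the browser between them.)

```js
// Hypothetical sketch: tally words in small chunks, yielding back
// to the browser between chunks so the UI never freezes.
function scanNicely(words, chunkSize, onDone) {
  const counts = {};
  let i = 0;
  function step() {
    const end = Math.min(i + chunkSize, words.length);
    for (; i < end; i++) {
      counts[words[i]] = (counts[words[i]] || 0) + 1;
    }
    if (i < words.length) {
      setTimeout(step, 0); // yield, then pick up where we left off
    } else {
      onDone(counts);
    }
  }
  step();
}

// Usage: tally the whole page 500 words at a time.
const words = document.body.innerText.toLowerCase().match(/[a-z]+/g) || [];
scanNicely(words, 500, (counts) => console.log(counts));
```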
It would still be a technically “cool” feature.