NewsXperiment and Google AppEngine

Wednesday, July 30th, 2008

I started playing with Google AppEngine a few months ago. First, I tried to port some of my earlier work over to AppEngine, but gave up on it after a short while. Always being on the lookout for new and interesting ideas, I somehow came up with this experimental mash-up site concept: NewsXperiment. Not your everyday mashup site; something different and unique. I spent some time experimenting with the code locally, and after I brought the Natural Language Toolkit (NLTK) into the mix, it immediately gained some traction. Brilaps was looking for a project to test out the new Google AppEngine, and it made sense to let this project be the guinea pig. Google AppEngine turned out to be a great fit, and it didn’t take long for me to bring this project from an idea to its first beta release. I haven’t come across anything similar yet, so if anything like it exists, please let me know.

So what is NewsXperiment? What can I do there? What is the roadmap and what were the challenges during development? I’ll try to answer those questions in this blog post.

What is NewsXperiment?

NewsXperiment is a news scrambler/generator site. In the simplest possible terms, NewsXperiment reads a bunch of RSS feeds, approximately 200, from a number of highly respected sources and scrambles their news titles and summaries using Natural Language Processing techniques. The idea is to create interesting, funny, and/or timely news stories based on actual real-time events as reported by news sources of all kinds across the Internet. The mash often produces comical stories such as “Princess Di Dancing with the Polar Bears at Golden Gate Bridge”. How would it come up with such a story? Well, at the time of scrambling there was probably some unrelated news about Princess Di, Dancing with the Stars, polar bears, and the Golden Gate Bridge. We randomly select and break apart each story, scramble the pieces up, and rebuild them to construct amusing and well-structured stories. The magic is in the reconstruction. The engine is still in beta, and thus the scrambled title/summary text still needs some refinement, but it is worth a bookmark and a glance every day or so, as it already generates some pretty interesting mashups several times a day.
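The break-apart-and-rebuild idea above can be sketched in a few lines of Python. The headlines and the split points here are invented for illustration; the real engine uses NLTK chunking to find phrase boundaries rather than pre-split fragments.

```python
import random

# Invented example headlines, pre-split into a subject phrase and a
# predicate phrase. The real engine derives these splits with NLTK.
headlines = [
    ["Princess Di", "attends a memorial service"],
    ["Dancing with the Stars", "returns for a new season"],
    ["Polar Bears", "spotted far south of the Arctic"],
    ["Crowds", "gather at Golden Gate Bridge"],
]

# Randomly pick a subject from one story and a predicate from another,
# then rebuild them into a brand-new (and usually absurd) headline.
subject = random.choice(headlines)[0]
tail = random.choice(headlines)[1]
print(subject, tail)
```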

What can I do at

You can simply poke around and glance at a few news entries. Or if you feel like digging in more, you can rate some stories and/or comment on them. Better yet, you can write your own version of the scrambled story using the references provided for that news. On top of all that, you can provide feedback and become a true NewsXperiment star :)

Roadmap and the challenges during development?

As of Aug 3rd, 2008 the basic functionality of an interactive website is in place.

The Scrambler Engine, news upload, admin-level CRUD operations, visitor comments, and visitor ratings are all implemented.

Some tech specs about NewsXperiment project:

  • redirects to
  • Built with Python
  • NewsXperiment hits Flickr per news item and grabs a relevant image (this is the fun part).
  • Utilizes NLTK libraries within the scrambler engine, which runs offline.
    • The generated output is a “zipped pickle” file, which is uploaded to Google AppEngine.
  • Runs on Google AppEngine.
    • Uses Django for server-side rendering.
    • Uses Yahoo! User Interface (YUI) Library for client-side JavaScript and CSS.
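The Flickr lookup mentioned above can be sketched against Flickr’s public REST API. This is a hypothetical sketch, not the engine’s actual code: the function name, the `per_page` choice, and the placeholder API key are mine, and a real key is required for the request to succeed.

```python
import json
import urllib.parse
import urllib.request

FLICKR_API_KEY = "YOUR_API_KEY"  # placeholder; get a real key from Flickr

def first_photo_url(query):
    """Search Flickr for `query` and build a URL for the first hit."""
    params = urllib.parse.urlencode({
        "method": "flickr.photos.search",
        "api_key": FLICKR_API_KEY,
        "text": query,
        "per_page": 1,
        "format": "json",
        "nojsoncallback": 1,
    })
    url = "https://api.flickr.com/services/rest/?" + params
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    photos = data.get("photos", {}).get("photo", [])
    if not photos:
        return None
    p = photos[0]
    # Assemble a static image URL from the photo record's fields.
    return "https://live.staticflickr.com/%s/%s_%s.jpg" % (
        p["server"], p["id"], p["secret"])
```

Calling `first_photo_url("polar bears")` with a valid key would return an image URL to attach to the news item, or `None` if nothing matched.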

What’s in the bag for near-future development:

  • Sometime in the near future, a “Fork This News” feature will be added. It will enable visitors to make a copy of an existing news entry and write their own version, which can then be rated, commented on, and forked over and over again. Currently, visitors can simulate this using the “Comment” form attached to each news item.
  • A better front-end design would be nice, but I highly doubt I’ll lose sleep over it. I absolutely wouldn’t mind if someone with good design skills took a stab at it.
  • NewsXperiment surely needs a new logo.

I’ll leave the challenges and the technical mumbo jumbo for another post… Any feedback is appreciated. Please feel free to comment here. If you prefer email, see the “About” link for contact info.

NewsXperiment – Tech Stuff – Episode 1

Sunday, August 10th, 2008

As stupid as it looks, and even though it “does NOT make any sense” from many angles, NewsXperiment brings together a few interesting software technologies and paradigms.

The NewsXperiment project consists of two parts: the NewsXperiment Scrambler Engine (NSE) and the web frontend.

The NewsXperiment Scrambler Engine runs offline; it gathers, processes, and scrambles the news, then outputs a zip file containing pickles of the scrambled news items.

Once executed, NSE goes through its categorized feed repository and retrieves the feeds, thanks to Mark Pilgrim’s excellent “feedparser” library.

Now that the feeds are read, the engine performs the following:

  • randomly picks a certain number of news items from each category as base feeds.
  • randomly associates a certain number of scrambler feeds to each base feed.
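The two selection steps above amount to random sampling per category. This is a hypothetical sketch: the category names, item labels, and the two count constants are invented, and the real engine reads from its own feed repository.

```python
import random

# Invented stand-ins for the categorized feed repository.
feeds_by_category = {
    "world":  ["w1", "w2", "w3", "w4", "w5"],
    "sports": ["s1", "s2", "s3", "s4", "s5"],
}

BASE_PER_CATEGORY = 2    # items promoted to "base" stories per category
SCRAMBLERS_PER_BASE = 3  # items whose phrases get mixed into each base

pairings = []
for category, items in feeds_by_category.items():
    # Step 1: pick the base feeds for this category.
    bases = random.sample(items, BASE_PER_CATEGORY)
    for base in bases:
        # Step 2: associate scrambler feeds with each base,
        # never pairing a base with itself.
        pool = [item for item in items if item != base]
        scramblers = random.sample(pool, SCRAMBLERS_PER_BASE)
        pairings.append((base, scramblers))
```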

At this point, the engine has the initial data in place. Then comes the scrambling… However, before anything is scrambled, all the entries picked for scrambling need to be tagged, chunked, and chinked. :)
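For the curious, tagging, chunking, and chinking look roughly like this in NLTK. The grammar here is an invented example, not the engine’s actual rules: it chunks noun phrases and then “chinks” (carves out) any verbs that landed inside them. The tokens are shown pre-tagged; in practice `nltk.pos_tag` produces these pairs.

```python
import nltk

# Pre-tagged tokens for one invented headline
# (in practice nltk.pos_tag generates these from raw text).
tagged = [("Polar", "JJ"), ("bears", "NNS"), ("danced", "VBD"),
          ("at", "IN"), ("the", "DT"), ("Golden", "NNP"),
          ("Gate", "NNP"), ("Bridge", "NNP")]

grammar = r"""
  NP: {<DT>?<JJ>*<NN.*>+}
      }<VB.*>{
"""
# {...} chunks: optional determiner, adjectives, then nouns.
# }...{ chinks: remove any verbs back out of the chunks.

tree = nltk.RegexpParser(grammar).parse(tagged)

# The NP subtrees are the swappable building blocks for scrambling.
phrases = [" ".join(word for word, tag in subtree.leaves())
           for subtree in tree.subtrees() if subtree.label() == "NP"]
print(phrases)
```

These noun phrases ("Polar bears", "the Golden Gate Bridge", and so on) are what get shuffled between base and scrambler stories.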

  • Using NLTK, all the titles and summaries read are tagged, chunked, and chinked (I love this part).
  • According to the chunked and chinked data, each base feed item’s title and summary are scrambled with the set that was destined to be the scrambler for that base. Of course, this does not always result in a well-constructed sentence.
  • At some point, the scrambling process completes and it’s time to generate the output file.
  • The output file is created from the scrambled items and consists of a list of titles, summaries, and links back to the news items that were used to create them. This file is a pickle dump of dictionary elements.
  • The output file is datestamped and zipped. A zip file because, doh!, it’s compressed. Plus, I couldn’t find a way around uploading the raw pickle content to Google AppEngine. Very likely a MIME type issue, but I didn’t dig deep into that. A zipped pickle dump was all I needed, and I had it.
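The zipped-pickle output step can be sketched with nothing but the standard library. The field names and archive layout here are invented for illustration (the engine’s actual dictionary format may differ), and the sketch writes to an in-memory buffer where the engine writes a real file.

```python
import io
import pickle
import zipfile
from datetime import date

# Invented example of one scrambled item's dictionary.
scrambled = [
    {"title": "Polar Bears at Golden Gate Bridge",
     "summary": "An improbable gathering.",
     "links": ["http://example.com/a", "http://example.com/b"]},
]

# Datestamp the archive member, matching the step described above.
stamp = date.today().strftime("%Y%m%d")
member = "scrambled-%s.pickle" % stamp

# Pickle the list of dictionaries, then store the bytes in a
# compressed zip archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr(member, pickle.dumps(scrambled))

# Reading it back on the upload side restores the original structure.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    restored = pickle.loads(zf.read(member))
```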

Very well, I have the zipped pickles; what do I do with them? If I cannot get them into Google AppEngine’s data store, how could I possibly share them?