The only problem I see is that you say you'd like around 1,000 URLs uniformly distributed across all categories, but we have over 590,000 categories. With such a small sample I'm not sure you'll be able to get a uniformly distributed set of URLs from *all* categories. Or maybe I'm misunderstanding exactly what you are looking for...
I want to build an application that will automate the classification of URLs into different categories. To start, I want to begin with just two broad categories: educational sites and non-educational sites.
I need about 10,000 URLs to train and test my classifier (so the set should be a mix of educational and non-educational sites). Is there any way to do this?
Take a look at http://rdf.dmoz.org/; you'll need to find a way to parse the RDF into your database.
I did this the other day using the odp2db scripts from Steve's Software. They're old, but the format hasn't changed significantly so they work fine.
I found I didn't need the iconv and xmlclean.pl steps suggested in the readme; I just uncompressed the dumps and ran the structure2db.pl and content2db.pl scripts. You'll need to create the database tables manually (see the SQL at the top of each script) and modify the connection details in the scripts before you start.
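If you'd rather not use the odp2db scripts, rolling your own extractor is not much work. Here's a minimal sketch in Python that scans the content dump line by line (the file is far too big to load as a DOM) and pulls out (URL, topic) pairs; the element names (`ExternalPage`, `topic`) match what the DMOZ content dump uses, but the edu/non-edu labeling rule at the end is just an illustrative assumption you'd want to refine:

```python
import re

def iter_urls(lines):
    """Yield (url, topic) pairs from an iterable of DMOZ content-dump lines.

    Streams line by line rather than parsing the whole RDF document,
    since the real dump is several gigabytes.
    """
    url = None
    for line in lines:
        # An ExternalPage element opens each listed site: <ExternalPage about="...">
        m = re.search(r'<ExternalPage about="([^"]+)"', line)
        if m:
            url = m.group(1)
            continue
        # The <topic> child gives the category path, e.g. Top/Shopping
        m = re.search(r'<topic>([^<]+)</topic>', line)
        if m and url:
            yield url, m.group(1)
            url = None

# Tiny inline sample standing in for content.rdf.u8 (hypothetical URLs):
sample = '''
<ExternalPage about="http://example.edu/">
  <d:Title>Example University</d:Title>
  <topic>Top/Reference/Education</topic>
</ExternalPage>
<ExternalPage about="http://example.com/">
  <d:Title>Example Shop</d:Title>
  <topic>Top/Shopping</topic>
</ExternalPage>
'''.splitlines()

for url, topic in iter_urls(sample):
    # Crude two-way labeling for the edu/non-edu split -- an assumption,
    # not something the DMOZ data defines for you.
    label = "edu" if "/Education" in topic else "non-edu"
    print(url, topic, label)
```

From there you could sample ~10,000 pairs across the two labels instead of loading everything into a database first.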