Web scraping is something that practically always has to be done custom for the specific page you want to scrape. You need to adapt the scraper to that site, and you need to keep up with any changes made to the source website, continuously. That's hard if you have no idea yourself, and it's even harder in Perl (which every reasonable web dev has long forgotten). Are there no reliable xml sources for the stuff you want?
Anyway, Max is absolutely right - doing this in Perl is like trying to fix a car with a stone axe, or writing a book with stone and chisel. Still, you were told how to call a Perl script from PHP - we do know how to do that. What we don't know and can't know are the correct path and permission settings on your server.
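To illustrate, here's a minimal sketch of calling a Perl script from PHP via shell_exec(). The interpreter path and script location are placeholders, not values from this thread - substitute whatever applies on your server and make sure the web server user can execute the script:

[CODE]
<?php
// Minimal sketch - the paths below are assumptions, adjust for your own server.
$perl   = '/usr/bin/perl';               // assumed Perl interpreter location
$script = '/home/youruser/scraper.pl';   // hypothetical path to your Perl scraper

// escapeshellarg() keeps the command safe if any part ever comes from user input.
$cmd = escapeshellarg($perl) . ' ' . escapeshellarg($script) . ' 2>&1';

// shell_exec() returns the script's output, or NULL if it could not be run
// or produced no output - check the path and execute permissions in that case.
$output = shell_exec($cmd);

echo ($output === null)
    ? 'Could not run the Perl script - check the path and permissions.'
    : htmlspecialchars($output);
[/CODE]

If your host disables shell_exec() (many shared hosts do), exec() or proc_open() are the usual alternatives, subject to the same path and permission questions.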
Quote:
Originally Posted by tbworld
Very elegantly written @Cellarius.
This is really not a programming question, merely a programming task to be completed, and should be re-asked in a 'request' forum.
My friend probably just picked Perl because it was the easiest for him; to be fair, the guy has worked for very well-known companies, but he probably just programs in his preferred language.
It's a free website and more of a hobby, so while I'm willing to pay, I'm not willing to pay a fortune LOL.
BTW, thanks for everyone's responses too.
Quote:
Originally Posted by cellarius
Are there no reliable xml sources for the stuff you want?
I actually tried Mozenda and was considering the price, but it didn't work, and the refresh wasn't every 30 seconds.
I checked a few others; some were too expensive and some just didn't work. Some only automate web scraping to your local computer rather than online. Any suggestions would be appreciated...
While I note your comments about the Perl concerns, to be fair, the script actually works when run from the command line or locally.