Greg_Spain wrote: If I try to run WebScrape within Awasu, it brings up the message "Updated xxxx Ok", but no feed items appear. WebScrapeSettings, however, returns a 407 proxy authentication error
Yes, the way WebScrape and WebScrapeSettings download the page is different. WebScrape downloads it fresh every time it's run; WebScrapeSettings downloads it once, saves it away, and then passes that saved file to WebScrape every time you want to test your configuration (the idea being that you'll do this a lot, so you don't want to wait for the page to be downloaded every time).
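The "download once, reuse the saved copy" idea can be sketched like this (illustrative only, not WebScrapeSettings' actual code; the cache file name is made up):

```python
import os
import urllib.request

def get_page(url, cache_path="page_cache.html"):
    """Download the page once; later calls reuse the saved copy."""
    if not os.path.exists(cache_path):
        # First run: fetch the page and save it away.
        with open(cache_path, "wb") as f:
            f.write(urllib.request.urlopen(url).read())
    # Every subsequent test run reads the local file, no network hit.
    with open(cache_path, "rb") as f:
        return f.read()
```

This is why WebScrape hits the proxy (and fails) on every run while WebScrapeSettings only hits it on the very first download.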
WebScrape was written by one of our users who is no longer involved with Awasu, and he never released the source code, so we can't fix it to go through a proxy. I've been having a play with it, and the only way I can think of to get it to work would be to write an intermediate plugin that downloads the page (through a proxy if necessary), saves it to a temp file, and then tells WebScrape to process that.
It's a bit messy, but it shouldn't be hard to do. Awasu passes an INI file to the plugin containing all the information it needs to do its thing. You would configure the channel to run this intermediate plugin (let's call it WebScrapeProxy.py), which would read the INI file to figure out the URL to be scraped and download the page to a temp file. It would then tweak the INI file to point to the temp file (instead of the real URL) and call WebScrape.exe. WebScrape.exe would have no idea it wasn't being called by Awasu; it just does what the INI file tells it, so it would do its thing, WebScrapeProxy.py would delete the temp file, and control returns back to Awasu...
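Something along these lines, perhaps (a rough sketch only: the INI section/key names "ChannelInfo"/"Url" and the way WebScrape.exe gets invoked are guesses, so check the plugin docs for the real details):

```python
# WebScrapeProxy.py - hypothetical intermediate plugin (sketch).
import configparser
import os
import pathlib
import subprocess
import sys
import tempfile
import urllib.request

def rewrite_ini(ini_path, temp_page_path, section="ChannelInfo", key="Url"):
    """Point the INI's URL entry at the locally saved page; return the original URL."""
    cfg = configparser.ConfigParser()
    cfg.read(ini_path)
    url = cfg[section][key]
    cfg[section][key] = pathlib.Path(temp_page_path).resolve().as_uri()
    with open(ini_path, "w") as f:
        cfg.write(f)
    return url

def main(ini_path):
    fd, temp_page = tempfile.mkstemp(suffix=".html")
    os.close(fd)
    try:
        # Swap the real URL for the temp file, remembering the real URL.
        url = rewrite_ini(ini_path, temp_page)
        # urllib honours the usual proxy environment variables.
        with open(temp_page, "wb") as f:
            f.write(urllib.request.urlopen(url).read())
        # WebScrape.exe just follows the INI file, so it processes the temp copy.
        subprocess.run(["WebScrape.exe", ini_path], check=True)
    finally:
        os.remove(temp_page)

if __name__ == "__main__":
    main(sys.argv[1])
```

If the proxy needs authentication, you'd plug a `urllib.request.ProxyHandler` (plus an auth handler) into an opener instead of calling `urlopen` directly.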