strange issue, reload required, how to do it?
Posted: Thu Apr 25, 2013 2:42 pm
Hello,
I managed to extract a table of 167 links using hierarchical scraping, as presented in the video. That works great; the next step is to scrape the contents behind these links.
That's where things get tricky: these pages use heavy JavaScript and dynamic content, and to fully load a page I need to click the refresh button of the internal browser.
I can create kinds and extract data from a single loaded page without issue.
But when using the "Navigate URLs at" action, I cannot reload each page; as a result, Helium Scraper never gets the completely loaded page and is unable to scrape the data.
Is there any trick to force a reload of the URL (with some JavaScript) before the extraction?
Regards,
Stephane