I am trying to use the 'Navigate URL' action on some URLs. The program goes to the URL, but then it hangs there in execution mode for minutes, and the rest of the actions (extract) never run. I turned off 'suppress script errors' and saw some JavaScript errors on the pages. Is there a way to tell 'Navigate URL' to stop executing after 20 seconds and ignore the errors?
The URLs I am pulling are below.