Scrapers burning to hell

It appears the Jarvee team is probably reading this post :slight_smile: Thanks for the update! Now it shows API/EB actions on the scrapers.

I’ve put aside 3 new scrapers to test this method today, but will run a bigger test group of 2x 20s next week to see if this really improves the results. It’s fascinating that you only lost 1 scraper on DC proxies in 2+ weeks.

Well done mate!

I’ll PM my settings so you can copy them precisely (API Emulation and others).

Hey guys! What are the current recommended scraper settings? I used Jarvee support’s and it caused a massive amount of captchas. They recommended up to 100 an hour and 400 API calls, then a 400-800 minute break.

I tried a super low setting with 1600 scrapers and no actions would go, because the one scraper couldn’t do the first task. Seems 50-60 API calls is the minimum.

Yes, because even 100 API calls an hour is a small number, especially if your accounts are doing many actions and have many filters. I would say keep it between 70 and 80 API calls per hour and 300 per day; lower than this and you will be getting many Delayed statuses.
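
For anyone who thinks better in code, here is a rough Python sketch of the pacing being described. Only the 70-80 calls per hour and ~300 per day figures come from this thread; `do_api_call()` and the sleep windows are made-up placeholders for illustration, since Jarvee handles all of this internally.

```python
import random
import time

# Rough illustration of the pacing discussed above. Only the 70-80/hour and
# ~300/day numbers come from the thread; everything else is a placeholder.
HOURLY_BUDGET = random.randint(70, 80)
DAILY_BUDGET = 300

def do_api_call(n):
    print(f"API call #{n}")  # stand-in for one scrape request

def run_day():
    calls_today = 0
    while calls_today < DAILY_BUDGET:
        hour_quota = min(HOURLY_BUDGET, DAILY_BUDGET - calls_today)
        for _ in range(hour_quota):
            do_api_call(calls_today)
            calls_today += 1
            time.sleep(random.uniform(35, 45))   # spread calls out, no bursts
        time.sleep(random.uniform(600, 1200))    # rest before the next batch

if __name__ == "__main__":
    run_day()
```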

Are you scraping with the follow tool or the scrape tool? I’ve had a problem the last few days when scraping with the follow tool: I’m getting errors like “extracted 0 users” too often. I don’t know what the reason could be, do you have any ideas?

Are you using the Hashtags source? If yes, please try unchecking ‘scrape with EB where possible’ on the account and on the scrapers, and see if you get results after.

Yes, I am using the hashtag source. That is already unchecked. Only partial API emulation and random actions via API are checked and it worked great for a long time. :slight_smile:

Does the proxy show valid status in Proxy Manager tab? Do you have the “enable scrape different users across all accounts” option checked?

Yes, the proxy is valid. The “Enable scrape different users…” option is checked. I also added many popular hashtags to the follow source.

It looks like this is the problem: “Search took more than 20 minutes and was aborted. Please loosen up your settings/adjust sources”. But that seems impossible, the hashtags are too big for that. Can I fix this somehow?

I guess you already checked your filters and sources and made sure you have enough. Did you check the API scrape blocks and see if you have any? How many scrapers do you have, and how much data do you scrape?
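
To make that abort message more concrete, here is a small Python illustration of how a hard 20-minute search cutoff can end as “extracted 0 users” when the filters reject nearly every candidate. The 20-minute limit comes from the error quoted above; `candidate_stream()` and `passes_filters()` are hypothetical stand-ins, not Jarvee internals.

```python
import time

SEARCH_TIMEOUT = 20 * 60   # hard cutoff from the error message, in seconds
TARGET_USERS = 50

def candidate_stream():
    # placeholder: yields users scraped from the hashtag sources
    while True:
        yield {"followers": 10, "has_profile_pic": False}

def passes_filters(user):
    # placeholder: overly strict filters reject almost every candidate
    return user["followers"] > 1000 and user["has_profile_pic"]

def search():
    start = time.monotonic()
    extracted = []
    for user in candidate_stream():
        if time.monotonic() - start > SEARCH_TIMEOUT:
            # the whole window was spent filtering, so the result is empty
            print(f"Search aborted, extracted {len(extracted)} users")
            break
        if passes_filters(user):
            extracted.append(user)
            if len(extracted) >= TARGET_USERS:
                break
    return extracted
```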

My scrapers are just getting killed after 40-50 API calls; I’ve tried many proxies and suppliers. It’s getting very annoying, as I had like 3-6 months with no issues. Is anyone successfully running Jarvee with 250+ accounts with scrapers? (Not Jarvee support people)

Yes, I checked, I don’t have API scrape blocks. I am using around 200 scrapers, but with shifts. Could this be related to my proxies, maybe some of the IPs are blacklisted? I have no idea. I am using a rotating 4G proxy. Scraping works great for a few days, then doesn’t work for the next few days (the “extracted 0 users” error). All the time I am using the same settings, same scrapers and same proxies.
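
One cheap way to test the blacklisted-IP theory is to log which exit IP the rotating 4G proxy hands out around the times scraping stops working, then check whether the bad days line up with particular IPs. A minimal Python sketch, assuming a hypothetical proxy URL and using a public IP echo service as the checker:

```python
import datetime
import requests

# Hypothetical rotating 4G proxy URL; replace with your own.
PROXY = "http://user:pass@rotating-4g-proxy.example.com:8000"

def log_exit_ip():
    """Record the current exit IP so bad scraping days can be matched to IPs."""
    proxies = {"http": PROXY, "https": PROXY}
    try:
        ip = requests.get("https://api.ipify.org", proxies=proxies, timeout=15).text
    except requests.RequestException as exc:
        ip = f"error: {exc}"
    line = f"{datetime.datetime.now().isoformat()} {ip}"
    with open("exit_ips.log", "a") as f:
        f.write(line + "\n")
    return line

if __name__ == "__main__":
    print(log_exit_ip())
```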