[METHOD] Alternative Way to Set Up Your Scraper Accounts - JARVEE

I’m pretty sure I followed your steps and tried many times.
Just sent it to the support team.
Thanks Hadi, you rock!


Do you have full API emulation turned on for the scrapers? If yes, that’s the reason for the additional API calls. It’s not necessary to have API full emulation turned on on your scrapers; all it does is make a bunch of additional API calls that aren’t needed for what you’re using them for, which will lead to them being blocked. Once I turned off full emulation, my scrapers stopped getting blocked, and I haven’t had to replace any of them going on 4 weeks now.

Edit: I want to add, it’s important to set up the delays that @Hadi has shown for the scrapers. I changed the numbers a little bit as I was still getting a lot of email confirmations.
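The delay idea can be sketched like this. The ranges below are my own placeholders, not @Hadi’s actual numbers (which aren’t quoted in this thread), and JARVEE sets delays through its GUI, not through code:

```python
import random
import time

# Placeholder delay ranges in seconds; treat these purely as an
# illustration of randomized delays, not as recommended values.
DELAY_BETWEEN_CALLS = (30, 90)      # gap between consecutive scraper API calls
DELAY_AFTER_ERROR = (1800, 3600)    # back off 30-60 minutes after an API error

def pick_delay(delay_range):
    """Choose a random delay within the range so calls don't follow a fixed pattern."""
    low, high = delay_range
    return random.uniform(low, high)

def wait(delay_range):
    """Sleep for a randomized amount of time within the given range."""
    time.sleep(pick_delay(delay_range))
```

Randomizing within a range, rather than using a fixed interval, is the usual way to avoid a detectable request pattern.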


Does anyone know if this method also works when the main is set to “send users to extracted list”? I just want to do a big user extraction, with one main defining the filters but the scraper army doing the actual scraping.

So, does anyone know if “send users to extracted list” works?

Good points, thanks!
Do you mean the ‘API full emulation’ button, the ‘Optimize API calls to do more actions’ button, or both?
What about also setting scraper accounts to ‘Use only API to save bandwidth’?


Checking “Use only API to save bandwidth” won’t really help, as scrapers are already using only the API to scrape, so there’s no point in that :man_shrugging:

This is actually the main point of this method, if I understood you correctly. You don’t set anything on your scraper accounts (all tools are off); you set everything on your main account. So you edit the settings on your Like tool, for example, check “send users to extracted posts”, and it will work. Is that what you’re asking?


So scrapers can then handle the API calls for normal accounts without settings of their own, @Lordhungry? Because you can’t access the tool configurations of scraper accounts, unlike the normal old scraping accounts.

Not exactly. This method explains that it will send those extracted users to a “specific users” list, but I’m not sure if it will actually work with “send to extracted users list”. I just want to collect them in the “extracted users” list, nothing else.

Hello guys,
have you been facing problems with your API scrapers since yesterday?
Mine are locked, requesting PV on the creation phone number (which I don’t have).


Why old Phone Verifications...? If I have new Phone Nr and Email: https://mpsocial.com/t/why-old-phone-verifications-if-i-have-new-phone-nr-and-email/109521

Not exactly, but sort of: my main account’s API calls have skyrocketed per hour and per day, about 10x what they were a few days ago. No settings were changed, and the max values per day were already being reached. Getting more API errors too. Going to slow it down before it gets blocked.

Nope, haven’t noticed anything different in any of mine atm.

API full emulation - keep that unchecked
Optimize API calls - checked
Use only API to save bandwidth - checked

Make sure “use embedded browser only” isn’t checked. Now select all your scrapers and select “re-login accounts”.
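The checklist above can be sketched as a small script. The setting names come from the posts in this thread; the dictionary keys and the validation helper are purely illustrative, since JARVEE exposes these as GUI checkboxes, not as a config file:

```python
# Recommended scraper-account settings from the thread (illustrative only;
# JARVEE has no code-level config, so the keys below are made-up names).
RECOMMENDED_SCRAPER_SETTINGS = {
    "api_full_emulation": False,            # extra API calls -> more blocks
    "optimize_api_calls": True,
    "use_only_api_to_save_bandwidth": True,
    "use_embedded_browser_only": False,
}

def check_settings(current: dict) -> list:
    """Return the names of settings that differ from the recommendation."""
    return [
        name
        for name, wanted in RECOMMENDED_SCRAPER_SETTINGS.items()
        if current.get(name) != wanted
    ]

# Example: a scraper that still has full emulation turned on.
mismatches = check_settings({
    "api_full_emulation": True,
    "optimize_api_calls": True,
    "use_only_api_to_save_bandwidth": True,
    "use_embedded_browser_only": False,
})
print(mismatches)  # ['api_full_emulation']
```

After adjusting the checkboxes to match, re-logging the scrapers (as the post says) makes the new settings take effect.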

You’re getting API errors on your main account? If you’re using scrapers and have “optimize API calls” on, that shouldn’t be the case.


Thanks for the support. Strange indeed. My scrapers are making only 5-8% as many API calls as the main account. It’s like the main is not using the slaves at all.
Main acct:
Optimize API calls - checked
API full emulation - checked (maybe this is my problem?)
Use embedded browser only - all unchecked

Rest of the settings are per the OP.

Just tried testing without API full emulation, but got an error 0 during re-login.
TBC.

If you’re using the API on the main account, then yes, full API emulation is necessary nowadays… but it shouldn’t be pushing you over the top of your normal API calls.

May I ask which actions you are performing daily on your main account? Have you set up delays in case of error on the main? I’ve honestly never had a problem with API calls on a main.

If you can check your logs, try to find the error message for the error 0 you got. If you see the message “Please wait a few minutes before you try again.”, you should wait 1-2 days before trying to verify your account again.


Main acct performs from 4am to 8pm:

  • Like ≈ 300/day
  • Follow set at max 120/day
  • Unfollow set at max 120/day

I do have a few filters checked in the Follow settings (9 checked options).
Yes, I do have delays set, and they were hit early in the day when API calls clocked in at 8,000. I stopped the account completely to figure out what’s happening; it was the 2nd day this had occurred. At that point, the follow/unfollow counts were only at 45. I found it strange.

My two scrapers are barely hitting 300 API calls a day.
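A back-of-the-envelope check shows why 8,000 calls looks wrong. The calls-per-action figure below is my own assumption (the thread never says how many API calls each action costs in JARVEE), but even a generous estimate stays far under what was observed:

```python
# Rough sanity check: expected vs. observed API call volume on the main.
# The daily limits come from the post above; calls_per_action is an
# illustrative guess, not a JARVEE-specific number.
daily_actions = {"like": 300, "follow": 120, "unfollow": 120}
calls_per_action = 5  # assumed upper bound: search, profile fetch, action, etc.

expected = sum(daily_actions.values()) * calls_per_action
observed = 8000  # what the poster saw early in the day

print(expected)                        # 2700
print(round(observed / expected, 1))   # 3.0 -> roughly 3x the generous estimate
```

With only 45 follows/unfollows actually performed when the 8,000 was hit, the gap is even larger, which supports the suspicion that full API emulation on the main was generating the extra calls.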

That was helpful, thanks. I thought it was an antivirus-related thing (as per the Jarvee code), but it seems it’s proxy-related, huh! I’ve been running these 2 scrapers for my main account on the same home proxy for months and never had an issue, other than blocks once in a while when I tried going too aggressive.
I can’t log in, but I can run the embedded browser though (?)

UPDATE: I did the antivirus steps (adding Jarvee to the exclusions, then running the Jarvee updater) and it solved the problem (y)!

2020-06-17 23:08:59.8179 - Login for username:account name
2020-06-17 23:09:20.5505 - failRequest:account name Api call not allowed without a proxy or proxy error(check proxies in Proxy Manager).
2020-06-17 23:09:20.5505 - Error executing an Api Login Operation for account name: code 0 - .
2020-06-17 23:09:20.5655 - Login failed with unknown errorcode:account name

300 likes, 120 follows, and 120 unfollows is a lot of actions. It shouldn’t be enough to take you over your limit, though.

Personally, I’ve never run follow and unfollow on the same day, and alternating them has always worked better for me than running both together.


Will alternate Follow/Unfollow days to see what happens, thanks for the tip!


Glad you fixed it! It seems the antivirus deleted some important file in your Jarvee folder, so adding the folder to your antivirus exclusion list and then running the updater file to restore the missing file fixed the issue.
