Opinion needed - scrapers tagname method vs classic method. Huge difference?

So this has been puzzling me for a long time. I thought I would outline my experience and see what yours has been, as it's just insane lol.

TL;DR: the classic scraper method got me 50 users per scraper daily and the tagname method gets me fewer than 5, wtf? lol

Earlier this year, when I had just got back into Jarvee, I was running 40 slaves and 30 scrapers, using the classic method to scrape. In case you're not aware, this means my scrapers were normal Instagram accounts, not accounts added as dedicated scrapers, and I would run the follow tool on them and feed the results into the main accounts as 'Follow Specific Users'.

Interestingly, I had some scrapers shut down but actually very few captchas/EVs, and ultimately I was easily able to cap my limits on all 40 slaves daily, with 800 specific users fed into the accounts.

I even had fewer than 20 scraper accounts at some point and was still able to keep all 40 slaves full of sources daily. 'Cause I was too lazy to top them up haha.

Of course, I wanted to scale and decided to hop back onto the tagname method; it took a while to find the correct API/EB settings to stop losing accounts. For obvious reasons: it's free on the 70+ accounts license and is easier to manage compared to the classic method.

So on this particular test Jarvee license on another VPS, I now have more than 400 scraper accounts, multiple proxies, and around 100 slave accounts. Those slaves are only at the 20-30 follows warm-up stage at the moment, and already around 30% of the accounts can't reach their follow limits. Of course this is because I hit the API/EB scraping limits on the scrapers and run out of actions; I know that.

But how on earth is it possible that 20, or let's be generous, 30 scrapers using the classic method were able to keep 40 accounts full of sources, meaning adding at least 50 new users to each slave account?

Meaning at least 1,500 users were being scraped daily by just 30 accounts.

Now, with the tagname method active, I have 400+ scrapers, but let's keep it simple: 400 scrapers, 20 follows per slave (currently in the warm-up phase), 100 slaves total. That's 2,000 users a day we're failing to reach.

I get it: before, I was using the slaves themselves to do the EB/API calls, and that removed a lot of strain from the scrapers, allowing them to do nothing but scrape.

But how can the difference be so big? It almost doesn't make sense. I've updated the sources to make sure they're high quality, the mobile proxies are rotating well, etc.

Like, I'm estimating I would need 2,000+ scrapers or more to keep 100 slaves actively at 100 daily follows. How is this possible? lol.
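Just to sanity-check the gap, here's a quick back-of-envelope sketch using only the figures from this thread (the per-scraper yields and the 2,000-scraper projection fall straight out of them):

```python
# Back-of-envelope math on the numbers in this post.

# Classic method: ~30 scrapers delivered at least 1,500 users/day.
classic_scrapers = 30
classic_users_per_day = 1500
classic_yield = classic_users_per_day / classic_scrapers   # users per scraper per day

# Tagname method: 400 scrapers can't even cover 100 slaves at 20 follows each.
tagname_scrapers = 400
tagname_users_per_day = 100 * 20                           # 2,000 users/day needed
tagname_yield = tagname_users_per_day / tagname_scrapers   # at best, since limits are missed

# Scrapers needed to run 100 slaves at 100 follows/day at the tagname yield:
target_users_per_day = 100 * 100                           # 10,000 users/day
scrapers_needed = target_users_per_day / tagname_yield

print(classic_yield, tagname_yield, scrapers_needed)       # 50.0 5.0 2000.0
```

So the per-scraper yield really did drop by roughly 10x, and the 2,000-scraper estimate is just the same arithmetic extended to full limits.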

Admittedly, I have seen far fewer captchas/EVs on my actual slaves since the API/EB actions dropped to single digits daily, but at this point I'm contemplating just reverting to the classic method, as the scraper expenses are starting not to make sense.

How is your experience with this?


Instagram continues to be strict with scraping, so what was effective before may not apply now. The tagname scrapers work well nowadays, especially since the method has lengthened the lifespan of scrapers.

If you have been using the same sources since you were on the classic scrape method, please try updating your sources and adding new ones too.

The advantage of the tagname method is that the sources and settings live on the main account, and when the main account executes, it uses a scraper to do the scraping. Also, 3 scrapers per main account is recommended, so if one scraper gets delayed, another will be used.

I have indeed updated the sources and made sure they're as good as possible via HypeAuditor; that helped a tad, but not a lot.

Maybe I wrote too much in my initial post xD. But actually, I was using the classic method just a week ago, not a long time ago, and my scrapers survived much longer on classic. Plus way more users were scraped vs tagname. That's why I'm so puzzled.

Yeah, it makes sense, which is why I'm so confused that I have 4 scrapers per main account and I'm still struggling to reach even 20-30 follows per slave.

Do you have the same filters that you were using in the Classic method?

Yep, exactly the same. In fact, the filters are very relaxed, so it shouldn't be that. I did, however, notice another thread about CPU running at 100%. I investigated further and noticed that I was very often spiking to 100%. Apparently there's a glitch that results in a lot of EBs getting stuck and not closing down. So I implemented an auto restart at 5am (when all the tools are sleeping) and enabled experimental force-killing of browsers, and that really seems to help keep the CPU from maxing out anymore. I followed 40% more the same day I implemented this change. Looking promising!