It is no secret that IG keeps getting better at telling human traffic apart from bot traffic. But obviously humans themselves differ in how they use the platform. Some may spend just a minute or two every hour scrolling their feed; others might barely look at the feed at all (because stories take up 90% of their app time); some work full time as marketers, so their traffic volume is much larger than average; and so on. The times of day the IG app is active on a user's phone are also an important part of a usage pattern. I'd expect the average human to switch between apps regularly (like a few photos on Insta, switch to Reddit, post a comment there, switch to Chrome and visit some webpage, go back to Insta, etc.). I'd be very surprised to learn that more than 1-2% of users keep the app in the foreground for more than an hour straight. They must do something else in between.
My question is: are you aware of any research, scholarship, or statistical data on IG usage patterns like the ones I just described? I'm asking because when it comes to next-generation automation tooling, more and more disguise and mimicry is needed to pass as a "normal" human.
I also wanted to add that the randomization that is supposed to protect you from IG's bot-detection algorithms apparently does very little good. Why? Because if, say, your tool likes ~100 posts a day in random batches of 10 to 20 posts with random intervals in between, on a larger time scale it is still a loop pattern! Meaning it is possible to detect such loops in your overall traffic if you keep this up for an extended period. Humans don't usually do things in loops. Or do they?
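To make the point concrete, here's a minimal sketch (all numbers are my own assumptions, not anything IG publishes) of a "randomized" bot that likes posts in random bursts of 10-20 with random pauses in between. Even though every individual choice is random, the daily totals converge to a tight band around the mean, which is exactly the kind of long-run regularity a detector could flag:

```python
import random
import statistics

def simulate_bot_days(days=60, seed=42):
    """Simulate a hypothetical 'randomized' bot: bursts of 10-20 likes,
    separated by random 2-4 hour pauses, running around the clock.
    Returns the total number of likes for each simulated day."""
    rng = random.Random(seed)
    daily = []
    for _ in range(days):
        t = 0.0      # minutes elapsed in the day
        likes = 0
        while t < 24 * 60:
            likes += rng.randint(10, 20)        # one random-sized burst
            t += rng.uniform(120, 240)          # one random-length pause
        daily.append(likes)
    return daily

daily = simulate_bot_days()
mean = statistics.mean(daily)
cv = statistics.stdev(daily) / mean  # coefficient of variation of daily totals
```

Despite the per-burst randomness, the coefficient of variation of the daily totals comes out small (on the order of 10%), because independent random draws average out over a day. A real human's daily activity is far burstier: sick days, binge sessions, weekends, vacations. That day-to-day variance, not the within-day jitter, is what naive randomization fails to mimic.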