Are scrapers a fact of life?
No way to completely prevent them.
It seems that the more popular your website gets, the more it will be under attack by scrapers.
I've come to the realization that it's impossible, and just not worth my time, to completely prevent scrapers from crawling my site.
I've decided that as long as the server is running smoothly, I'd rather spend my time improving the site than stopping each and every one of them, which, as I said, is near-impossible to do.
The only problem I have is when some extremely inconsiderate, impatient and greedy scraper decides to hammer the site with multi-threaded requests, using multiple IPs, thereby slowing the server down for everyone else, almost akin to a DoS attack. Those are the scrapers which literally make me lose sleep, and I have to divert my time from improving the site to dealing with these unscrupulous cretins.
In closing, I would like to say to the scrapers out there: if you're going to scrape, at least do it in a somewhat "ethical" way (if you can call scraping ethical) by not overloading your victim's website.
I've denied a couple of user-agents by name because they are ultimately triggered by humans who are too stupid to understand that a single click on their end translates to thousands of actions on my--that is, my site's--end. If, instead, they slowly visited page by page and at each step told the browser to save the whole thing ... I'd never even know they were scraping.
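For concreteness, a minimal sketch of that kind of user-agent denial, assuming a Python WSGI application (the agent names below are only placeholders, not the ones actually being blocked):

    # Hypothetical WSGI middleware that refuses requests whose User-Agent
    # contains a denylisted substring. The names here are examples only.
    DENIED_AGENTS = ("HTTrack", "WebCopier", "SiteSnagger")

    class DenyUserAgents:
        def __init__(self, app, denied=DENIED_AGENTS):
            self.app = app
            self.denied = tuple(a.lower() for a in denied)

        def __call__(self, environ, start_response):
            ua = environ.get("HTTP_USER_AGENT", "").lower()
            if any(bad in ua for bad in self.denied):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Forbidden\n"]
            return self.app(environ, start_response)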
Then there's the other side of proactive ... whitelisting. Allowing ONLY those you select and banning all others. Less stress.
YOU MIGHT inadvertently deny some which might be of benefit, but it is easier to add to your whitelist than it is to play whack-a-mole with a blacklist.
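As a rough sketch of what whitelist-only access could look like in code, assuming you identify the bots you want by IP range (the ranges below are documentation addresses standing in for real crawler networks):

    import ipaddress

    # Hypothetical allow-list: only these networks may crawl; everything else
    # that identifies as a bot is refused. Ranges shown are placeholders.
    ALLOWED_NETWORKS = [
        ipaddress.ip_network("192.0.2.0/24"),
        ipaddress.ip_network("198.51.100.0/24"),
    ]

    def is_whitelisted(client_ip):
        ip = ipaddress.ip_address(client_ip)
        return any(ip in net for net in ALLOWED_NETWORKS)

The appeal is exactly what the posters describe: the list stays short, and adding a newly discovered good bot is a one-line change.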
There are several effective Blocking Methods [webmasterworld.com] that don't involve maintaining a list.
that don't involve maintaining a list.
There's always a list. It might be a list of IP addresses, or it might be a list of User Agents, or it might be a list of header fields and values, or it might be a list of behaviors. Any time you're matching something against something else, there's ultimately a list involved.
Too true! The only decision involved is "long list" or "short list". :)
brotherhood of LAN
Steady with the whitelisting, as there's quite a resurgence of alternative search engines nowadays, especially in Europe, after a bit of a lull since Yahoo bought up the alternates. It's probably the easier proactive option, as most (all?) of those you'd consider whitelisting come from definite hostnames and ranges.
I agree with your conclusion, OP. The guys on this site will surely help you eliminate the 'background noise' of hack attempts, general-purpose stuff (and more; they're on top of the subject).
If anyone is determined to scrape your site, they can. Let's put it in simple terms: you can get one IPv4 address for $1/month. They rent 30 and scrape you at one page per second while rotating them, which works out to one fetch per IP every half minute. Ban it? They can easily look like human visitors with a headless browser, and you get into a recursive nightmare trying to decide otherwise. Nowadays it's simply not expensive or difficult to get a huge amount of IP diversity, and combating it involves a lot of maintenance, or trusting a maintainer.
There are sophisticated anti-scraping solutions, but they become counter-productive after a while. My own view is to focus on what you intend to build.
As with anything we do ... test, test, test! And test for at LEAST 90 days, if not 180 days. It takes time to see whether any given change actually makes a difference.
Cultivate patience!
There are ways of throttling anyone who hammers your server too hard. Hiawatha ( [hiawatha-webserver.org...] ) has per-IP throttling, or you can use fail2ban with any Linux server.
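If neither of those fits, a crude per-IP throttle can also live in the application itself. A sketch, with arbitrary limits and a single-process in-memory store (a real deployment would share state across workers and answer 429 on refusal):

    import time
    from collections import defaultdict, deque

    # Hypothetical sliding-window limiter: at most MAX_HITS requests
    # per WINDOW seconds from any single IP. Numbers are illustrative.
    WINDOW = 60
    MAX_HITS = 30
    hits = defaultdict(deque)

    def allow(ip, now=None):
        now = time.time() if now is None else now
        q = hits[ip]
        while q and now - q[0] > WINDOW:  # discard hits outside the window
            q.popleft()
        if len(q) >= MAX_HITS:
            return False                  # over the limit; refuse the request
        q.append(now)
        return True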
You might also find that a honeypot link which is blocked in robots.txt, and which bans any IP that follows it, may work.
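A rough sketch of that honeypot idea, assuming a small Flask app (the /secret-trap path and the in-memory ban set are invented for illustration; the link itself would be disallowed in robots.txt and hidden from human visitors):

    from flask import Flask, request, abort

    app = Flask(__name__)
    banned = set()  # a real setup would persist this or push it to the firewall

    @app.before_request
    def refuse_banned():
        if request.remote_addr in banned:
            abort(403)

    # Disallow /secret-trap in robots.txt; polite bots never request it,
    # so any client that follows the hidden link earns itself a ban.
    @app.route("/secret-trap")
    def honeypot():
        banned.add(request.remote_addr)
        abort(403)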
Banning and blocking is just one way. A DMCA notice is another, and requires less intensive effort.
The other thing to bear in mind is to use the scraper as a way of branding your own site: take advantage where you can.
I hear you on that; I have been trying to stop them for almost 1.5 years.
Have you looked into the type of IPs they are using?
Are they using hacked machines (less likely) or services on the cloud?
There are hundreds of cloud hosts these days, like quickweb, kvchosting.com, ioflood.com, servint.net, 24shells, amanah.com, yesup.com, pair, datayard.us, hostdrive, king-servers.com, greenhousedata.com, hostrocket, inmotionhosting.com, onr.com, versaweb, a2hosting, gogrid, servepath, us.net, forked.net, joesdatacenter, vividhosting, interserver and many, many more.
I thought the cloud providers had some type of third-party anti-abuse mechanism to monitor abusive traffic? (X outbound requests to Y from some number of IPs, over a period of Z = BAN!)
I think the easiest way to battle this is to make a server-side stats script that shows only the IPs that are not on your whitelist. I have since created this, and I am now able to block all of the previous day's IPs in a few minutes.
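A bare-bones version of that kind of stats script might look like the sketch below, assuming a common-format access log and a plain-text whitelist of IPs (both file paths are placeholders):

    from collections import Counter
    from datetime import date, timedelta

    ACCESS_LOG = "/var/log/nginx/access.log"  # placeholder path
    WHITELIST_FILE = "whitelist.txt"           # one IP per line; placeholder

    def yesterdays_strangers():
        whitelist = set(open(WHITELIST_FILE).read().split())
        # Common/combined log format timestamps look like [05/Nov/2019:17:46:02 ...]
        wanted = (date.today() - timedelta(days=1)).strftime("%d/%b/%Y")
        counts = Counter()
        with open(ACCESS_LOG) as log:
            for line in log:
                ip = line.split(" ", 1)[0]
                if wanted in line and ip not in whitelist:
                    counts[ip] += 1
        return counts.most_common()

    for ip, n in yesterdays_strangers():
        print(f"{n:6d}  {ip}")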
When will places like Google finally block cloud servers from actions like performing searches? If they were not able to make their main chunk of money, maybe they wouldn't have the resources to bot small users like us on a daily basis. It's one of the key selling points of these cloud proxy providers these days. Here are the two I've found that are botting me:
www.proxymillion.com
www.cloudproxies.com
I agree; there are so many cloud servers out there.
Not all of it is getting anywhere, and anything that does could get a DMCA, or, as I mentioned, use the scraped site for your own branding.
use the scraped site for your own branding.
That works if they KEEP scraping ... However, most times one scrape is all they want, and they won't UPDATE with your "branded" stuff.
For that to work, you have to BRAND it BEFORE they strip it for lazy nefarious reasons...
Whitelisting keeps 'em out before they get started ... but RESEARCH that, as whitelisting means "these and only these I will let in", and to do that correctly you have to ALSO look at agents, etc.
Lock it down ... or play whack-a-mole. Forever.
I'm blocking about 400 million IPv4 addresses (in my ER3-Lite router) from being able to access my company website. I block Chinese IP networks by the /16, and Amazon's entire 54.0.0.0/8. I block based on the AS number, grabbing all the listed subnets when specific countries are involved, or specific entities (OVH, Digital Ocean, etc.). Doing this big time for India and Brazil; also Russia, Ukraine and eastern Europe, Malaysia, the Philippines, and pretty much all of South America. We sell scientific research devices, so our market can range from the US/Canada, UK and western Europe, Scandinavia, Australia, Japan, Korea. I would like to sell to Russia, but that market is a black box for us (maybe they do physics and chemistry research, but probably not much pharma/biomed stuff). Same with India. We have sold to China, but it's very rare, so I have no problems blocking Chinese IPs, including Baidu bots. Also blocking Yandex bots. I see lots of Tor exit nodes, cloud and other hosters, and VPNs (I block them all upon discovery; it doesn't matter what country they're in). I see questionable hits from European and US/Canada residential ISPs, but I'm not touching those at the moment. Naturally I don't touch IPs / networks assigned to EDUs (except for blocking /24s that do IP scans for "educational" reasons).
Edit: For the past few years I've been blocking upwards of 95% of all in-use routeable IPv4 on our mail server. This has really cut down the spam. Our "contact us" web page explains that we do heavy mail filtering, and that if they haven't contacted us before, they should try our gmail address first. We switched our main phone and fax numbers to VoIP (voip.ms) a year ago, and I just love being able to filter entire area codes (and VoIP is SO CHEAP! We spend more on ground coffee and cream in our office kitchen now than on telco service!). So yeah, lots of filtering and blocking going on.
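For anyone doing this at the application layer rather than in the router, the same kind of CIDR check is a few lines with Python's ipaddress module; a sketch, where the second range is a documentation prefix standing in for whatever /16s you actually decide to block:

    import ipaddress

    # Tiny excerpt of the sort of block list described above; real lists
    # built from AS numbers and country allocations run to thousands of prefixes.
    BLOCKED = [ipaddress.ip_network(cidr) for cidr in (
        "54.0.0.0/8",        # the Amazon-heavy /8 mentioned above
        "198.51.100.0/24",   # placeholder for a country/entity range
    )]

    def is_blocked(client_ip):
        ip = ipaddress.ip_address(client_ip)
        return any(ip in net for net in BLOCKED)

    print(is_blocked("54.239.28.85"))   # True
    print(is_blocked("203.0.113.9"))    # False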
I block .... Amazon's entire 54.0.0.0/8
... We sell scientific research devices...
Really? Your company is blocking Merck?
NetRange: 54.0.0.0 - 54.35.255.255
CIDR: 54.0.0.0/11, 54.32.0.0/14
Organization: Merck and Co., Inc.
Your company is blocking Merck?
Have you ever had a visit from Merck? Or Halliburton or DuPont or Eli Lilly or any of the other companies that started out controlling a whole /8 before discovering they hardly need any of it--and certainly not externally--so it's more profitable to sell it to AWS?
No, I don't block /8 slabs wholesale either. But there's a reason they're all getting sold off.
I also block many of those networks. Places like Semrush and lots of others. Everybody says they can benefit you, but I fail to see how. I'm not going to let someone send thousands of requests to me every single day to ... benefit me? It seems the only ones benefitting are them and the competitor that now has insight into my data. No thanks.
As implied above, the very first step is to decide which bots (by category or specifically) are considered beneficial, neutral, or harmful, given your business model and requirements.
The second step is to decide which to allow or deny in theory. I say 'in theory' because the third step depends on just how deep one wants to go down this particular rabbit hole.
In broad strokes, the bot war is sort of like each step only getting you half the way there: one is always closer but never all the way. The first 50% is generally easy and simple; however, each subsequent 50% of the remainder is exponentially harder.
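As a toy sketch of those first two steps, the decisions can literally be written down as a small policy table; the bot names and verdicts below are placeholders for whatever your own business model implies:

    # Hypothetical mapping from a user-agent substring to a policy decision.
    # "allow", "throttle" and "deny" are the outcomes of step two; the
    # groupings reflect step one's beneficial/neutral/harmful judgement.
    BOT_POLICY = {
        "Googlebot":  "allow",     # judged beneficial for this (hypothetical) site
        "bingbot":    "allow",
        "SemrushBot": "deny",      # judged harmful: bandwidth spent informing competitors
        "AhrefsBot":  "deny",
        "SomeNewBot": "throttle",  # neutral/unknown until proven otherwise
    }

    def decide(user_agent):
        for name, policy in BOT_POLICY.items():
            if name.lower() in user_agent.lower():
                return policy
        return "throttle"  # default for unidentified bots; humans are handled elsewhere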
Because I find it fun, and because I can, I have quite extensive real-time bot defences, beyond the interest and/or capabilities of most webdevs - and I know, because of content identifiers, that my stuff is still successfully scraped, albeit far less than otherwise.
Which is where, as previously mentioned, one can utilise various regulatory weapons such as lawyers' letters, DMCA notices, the small-claims court, etc., depending on circumstances. Do be aware that these options can be time-consuming and expensive.
It seriously helps one's case when going the legal route if one has registered one's site/content rather than relying solely on simple publication copyright, as registered copyright typically allows going for damages, not just a takedown.
A complex subject. As, it seems, is just about everything about doing business online.
> Really? Your company is blocking Merck?
Merck has hit us, and they've bought from us. Same with Eli Lilly. A lot of those companies have merged over the past 10-15 years. The thing is, even if they still have huge CIDR assignments, their hits never seem to come from those CIDRs. Same with email. (Well, maybe back during 2000-2005 their hits and emails came from there, well before I started doing IP-based web blocking. Not any more.)
