# ScraperAPI - Marketing Research Report

Generated on: April 10, 2026
**Industry:** Developer Tools
**Website:** https://www.scraperapi.com

## The Takeaway

ScraperAPI wins by providing the one piece of infrastructure e-commerce teams can't build faster themselves: 40M proxies and specialized endpoints compress months of proxy management into a single API call.

---

# Company Research

## Company Summary

ScraperAPI is a SaaS-based web scraping company that provides proxy API solutions for businesses to extract data from websites without getting blocked. [1]

**Founded:** 2018 [14]

**Founders:** Not publicly disclosed [1]

**Employees:** Small team, currently hiring more employees as of 2024 [5]

**Headquarters:** Not publicly disclosed [1]

**Funding:** Acquired by SaaS.group, currently generating $400k per month in revenue [5]

**Mission:** To make web scraping accessible and scalable by handling proxies, browsers, and CAPTCHAs so customers can get HTML from any web page with a simple API call [1]

**Strengths:** The company's strengths rest on a combination of high success rates, robust proxy management, ease of use for developers, and specialized e-commerce scraping capabilities. [11]

• **Global proxy network**: Access to over 40 million proxies located across 50+ countries ensuring reliable data extraction without getting blocked [13]
• **High success rate**: Record-fast results even for the toughest websites to scrape with industry-leading performance metrics [11]
• **Developer-friendly API**: Simple API call structure that saves hours of development work and reduces operational costs [13]
• **E-commerce specialization**: Dedicated structured data endpoints for major marketplaces like Amazon, Walmart, and Google Shopping [15]

## Business Model Analysis

### 🚨 Problem

**Web scraping faces major technical barriers including proxy management, CAPTCHA solving, and anti-bot detection systems** [1]

• Websites implement sophisticated anti-bot measures that block automated data collection attempts [13]
• Managing proxy rotation and maintaining high success rates requires significant technical expertise and infrastructure [1]
• CAPTCHA challenges and browser fingerprinting make large-scale scraping complex and unreliable [1]
• Developers waste hours building and maintaining scraping infrastructure instead of focusing on data analysis [13]
• Getting blocked by target websites leads to incomplete data collection and project delays [13]

### 💡 Solution

**ScraperAPI handles all technical complexities through a simple API that manages proxies, browsers, and CAPTCHAs automatically** [1]

• Simple API call structure that returns HTML from any web page without technical complexity [1]
• Automatic proxy rotation using a global pool of over 40 million proxies across 50+ countries [13]
• Built-in CAPTCHA solving and browser fingerprinting protection [1]
• Structured data endpoints for major e-commerce platforms like Amazon, Walmart, and Google Shopping [15]
• JavaScript rendering capabilities for dynamic websites [8]
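The API pattern described above can be sketched in Python. This is a minimal illustration, not official client code: it builds a request URL following ScraperAPI's documented GET-endpoint style (`api.scraperapi.com` with `api_key`, `url`, and an optional `render` parameter); verify parameter names against the current docs before relying on it.

```python
from urllib.parse import urlencode

def scraperapi_url(api_key: str, target_url: str, render: bool = False) -> str:
    """Build a ScraperAPI request URL (sketch of the documented GET pattern).

    ScraperAPI proxies the fetch, rotating IPs and handling CAPTCHAs;
    the body of a GET to this URL is the target page's HTML.
    """
    params = {"api_key": api_key, "url": target_url}
    if render:
        # Ask ScraperAPI to render JavaScript before returning the HTML.
        params["render"] = "true"
    return "https://api.scraperapi.com/?" + urlencode(params)

# Example: a rendered fetch of a placeholder product page.
url = scraperapi_url("YOUR_API_KEY", "https://example.com/products", render=True)
# Fetch with any HTTP client, e.g. requests.get(url, timeout=70).text
```

Because billing is per request rather than per byte [9], each such call consumes API credits regardless of response size.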

### ⭐ Unique Value Proposition

**ScraperAPI stands out as the best overall web scraping API due to its high success rate, robust proxy management, and ease of use** [11]

• Record-fast results even for the toughest websites to scrape with industry-leading performance [11]
• Global proxy network of 40+ million IPs across 50+ countries for reliable access [13]
• Specialized e-commerce scraping with structured endpoints for major marketplaces [15]
• Developer-friendly API that saves hours of development work compared to building in-house solutions [13]

### 👥 Customer Segments

**ScraperAPI serves over 10,000 companies ranging from startups to large enterprises across multiple industries** [14]

• Startups and small businesses needing cost-effective web scraping solutions [14]
• Large enterprises requiring scalable data extraction for business intelligence [14]
• E-commerce companies collecting competitor and market data [15]
• Real estate businesses automating property listing data collection [13]
• Marketing agencies gathering consumer and competitor insights [13]

### 🏢 Existing Alternatives

**ScraperAPI competes in a crowded market with established players like Bright Data, Oxylabs, and emerging solutions** [10]

• Bright Data and Oxylabs are considered in a different league entirely with enterprise focus [10]
• ScrapeOps offers similar API-based scraping services [10]
• Oxylabs provides Web Scraper API starting at $49/month [7]
• Scrape.do offers 1,000 successful API calls per month on their free plan [8]
• Apify provides unified platform for web scraping and browser automation [12]

### 📊 Key Metrics

**ScraperAPI has achieved significant scale with over 10,000 customers and $400k monthly recurring revenue** [5]

• Over 10,000 companies currently using the platform [14]
• $400k per month in revenue as of 2024 [5]
• $3 million in annual revenue achieved in 2020 [3]
• Access to 40+ million proxies across 50+ countries [13]
• 5,000,000+ API credits available in enterprise plans [6]

### 🎯 High-Level Product Concepts

**ScraperAPI offers a comprehensive suite of web scraping tools including a general API, structured data endpoints, and specialized scrapers** [13]

• General web scraping API for any website with automatic proxy rotation [1]
• Structured data endpoints for Amazon, Walmart, and Google Shopping [15]
• Target-specific scrapers for major retail platforms [17]
• JavaScript rendering for dynamic websites [8]
• Real estate data collection tools for property listings [13]
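The structured endpoints differ from the general API in that they return parsed JSON rather than raw HTML. The sketch below is illustrative only: the `/structured/amazon/product` path and its `asin`/`country` parameters are assumptions modeled on ScraperAPI's product pages, so confirm the exact routes in the official documentation.

```python
from urllib.parse import urlencode

# Assumed endpoint path and parameters: confirm against ScraperAPI's docs.
STRUCTURED_BASE = "https://api.scraperapi.com/structured"

def amazon_product_url(api_key: str, asin: str, country: str = "us") -> str:
    """Build a URL for a hypothetical structured Amazon product lookup."""
    params = {"api_key": api_key, "asin": asin, "country": country}
    return f"{STRUCTURED_BASE}/amazon/product?" + urlencode(params)

# "B000000000" is a placeholder ASIN for illustration.
url = amazon_product_url("YOUR_API_KEY", "B000000000")
# A GET here would return structured JSON (title, price, reviews) instead of HTML.
```

The design point the endpoints illustrate: parsing is moved server-side, so callers integrate clean fields directly into analytics pipelines instead of maintaining per-marketplace HTML parsers.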

### 📢 Channels

**ScraperAPI uses digital marketing, developer community engagement, and content marketing to acquire customers** [11]

• Content marketing through comparison articles and best practices guides [11]
• Developer community outreach and technical documentation [9]
• SEO-optimized website with detailed product information and use cases [13]
• Partnership with hosting providers like Digital Ocean for customer case studies [14]
• Free trial offerings to attract new users [8]

### 🚀 Early Adopters

**Early adopters are typically developers and data teams at startups who need reliable web scraping without building infrastructure** [14]

• Startup developers looking for cost-effective alternatives to building scraping infrastructure [14]
• Small businesses needing competitive intelligence and market data [14]
• E-commerce companies requiring automated product and pricing data [15]
• Real estate professionals needing property listing automation [13]

### 💰 Fees

**ScraperAPI uses usage-based pricing with plans ranging from free trials to enterprise-level API credits** [6]

• Free trial available for testing the service [8]
• Plans based on number of API requests per month rather than bandwidth [9]
• Enterprise plans include up to 5,000,000+ API credits [6]
• Spending limits can be set to control costs [9]
• Pricing scales with usage volume and concurrent thread requirements [6]

### 💵 Revenue

**ScraperAPI generates revenue through subscription-based API usage fees with a monthly recurring revenue model** [5]

• Monthly subscription fees based on API call volume [9]
• $400k per month in recurring revenue as of 2024 [5]
• Tiered pricing structure for different usage levels [6]
• Enterprise contracts for high-volume customers [6]
• No bandwidth-based pricing, only request-based billing [9]

### 📅 History

**ScraperAPI was founded in 2018 and has grown rapidly through acquisition and revenue milestones** [14]

• 2018: Company founded [14]
• 2020: Achieved $3 million in annual revenue and 10,000 customers [3]
• 2024: Acquired by SaaS.group [5]
• 2024: Currently generating $400k per month in revenue [5]
• 2024: Actively hiring more employees for continued growth [5]

### 🤝 Recent Big Deals

**ScraperAPI was acquired by SaaS.group in 2024, marking a major milestone in the company's growth** [5]

• Acquisition by SaaS.group in 2024 [5]
• Partnership with Digital Ocean for infrastructure and customer case studies [14]
• Expansion of e-commerce scraping capabilities with structured endpoints [15]
• Plans to hire additional employees following acquisition [5]

### ℹ️ Other Important Factors

**ScraperAPI operates in a highly regulated environment where data privacy and ethical scraping practices are crucial** [13]

• Focus on extracting only public data in compliance with website terms [13]
• Global proxy infrastructure requires compliance with international data regulations [13]
• Competition from well-funded enterprise players like Bright Data and Oxylabs [10]
• Market trend toward specialized scraping solutions for specific industries [15]

---

# ICP Analysis

## Ideal Customer Profile

**High-growth e-commerce and data-driven companies** with **50-500 employees** that require **large-scale automated data collection** from major marketplaces like Amazon, Walmart, and Google Shopping. [14] [15]

These organizations have **dedicated development teams** but lack the infrastructure to build robust scraping systems, preferring **API-first solutions** that save development hours and reduce operational costs. [13] They typically operate in **data-intensive industries** where competitive intelligence and market research drive business decisions. [13] [15]

**Key qualifying behaviors** include processing **100,000+ URLs monthly**, valuing **high success rates** over custom solutions, and demonstrating willingness to scale from basic to **enterprise-tier plans** as data needs grow. [6] [11]

## ICP Identification Framework

| No. | Question | Answer | References |
|-----|----------|--------|------------|
| 1 | Which of our current customers makes the most out of our products and services? Who uses it the most? Who are your best users? | Best customers are **high-growth startups and small businesses** with **dedicated development teams** who require **large-scale data extraction** for competitive intelligence and market research. [14] [3] These companies typically have **technical expertise** but lack the infrastructure to build robust scraping systems in-house. [13] They achieve **$3M+ annual revenue** and serve **10,000+ customers** by focusing on **e-commerce data collection** from major marketplaces like Amazon, Walmart, and Google Shopping. [3] [15] | [14], [3], [13], [15] |
| 2 | What traits do those great customers have in common? | Common traits include **developer-centric organizations** that prioritize **API-first solutions** and **cost-effective alternatives** to building internal infrastructure. [13] [14] They typically operate in **data-driven industries** like e-commerce, real estate, and digital marketing where **automated data collection** is mission-critical. [13] [15] These customers value **high success rates** and **robust proxy management** over custom solutions, preferring **simple API calls** that save development hours. [11] [13] | [13], [14], [11], [15] |
| 3 | Why do some people decide not to buy or stop using our product? | Primary churn drivers include **enterprise customers** migrating to **higher-tier competitors** like Bright Data and Oxylabs who offer more advanced features. [10] Some users prefer **unified platforms** like Apify that combine scraping with browser automation capabilities. [12] **Cost-sensitive startups** may switch to **free alternatives** like Scrape.do that offer 1,000 monthly API calls at no charge. [8] Additionally, companies with **simple scraping needs** may opt for **lower-cost competitors** like Oxylabs at $49/month. [7] | [10], [12], [8], [7] |
| 4 | Who is easiest to sell more to, and why? | Easiest expansion comes from **existing e-commerce customers** adding **specialized endpoints** for additional marketplaces beyond Amazon. [15] [17] **Growing startups** scaling from basic to **enterprise plans** with 5,000,000+ API credits represent natural upsell opportunities. [6] Companies already using **general scraping APIs** readily adopt **structured data endpoints** for specific platforms like Target or real estate listings. [17] [13] These customers understand the value proposition and require **higher usage limits** as their data needs grow. [6] | [15], [17], [6], [13] |
| 5 | What do our competitors' best customers have in common? | Competitor customers often prioritize **enterprise-grade features** and **unified platforms** that combine multiple data extraction capabilities. [10] [12] Bright Data and Oxylabs customers typically have **larger budgets** and require **advanced proxy management** for complex scraping scenarios. [10] Apify customers value **browser automation integration** alongside web scraping capabilities. [12] **Price-sensitive users** migrate to **free-tier competitors** like Scrape.do, indicating an opportunity to capture **budget-conscious developers** with competitive pricing. [8] | [10], [12], [8] |

## Target Segmentation

### 🥇 Primary High-Growth E-commerce Companies

**Industry:** E-commerce, Digital Retail, Marketplace Analytics

**Company Size:** 50-500 employees, $5M-$50M annual revenue

**Key Characteristics:**

• **Marketplace data dependency**: Companies requiring automated collection from Amazon, Walmart, and Google Shopping for competitive pricing and inventory monitoring
• **Developer-first culture**: Organizations with dedicated engineering teams that prefer API-first solutions over manual data collection processes
• **Scale-driven operations**: Businesses processing 100,000+ product URLs monthly with need for high success rates and reliable proxy management

**Rationale:** Represents core revenue driver with proven $400k monthly recurring revenue and specialized e-commerce endpoints.

### 🥈 Secondary Data-Driven Startups

**Industry:** SaaS, Fintech, PropTech, Market Research

**Company Size:** 10-100 employees, $1M-$10M annual revenue

**Key Characteristics:**

• **Technical sophistication**: Startups with engineering expertise but lacking infrastructure to build internal scraping systems
• **Cost optimization focus**: Companies seeking alternatives to expensive enterprise solutions while maintaining reliability and performance standards
• **Rapid scaling needs**: Organizations transitioning from basic to enterprise API usage as data requirements grow with business expansion

**Rationale:** Strong growth potential as companies mature from startup to enterprise customers requiring higher-tier plans.

### 🥉 Tertiary Enterprise Data Teams

**Industry:** Fortune 500, Consulting, Financial Services

**Company Size:** 1,000+ employees, $100M+ annual revenue

**Key Characteristics:**

• **Large-scale data operations**: Teams processing 5,000,000+ API credits monthly for comprehensive market intelligence and competitive analysis
• **Compliance requirements**: Organizations needing robust data governance and ethical scraping practices for regulatory compliance
• **Integration complexity**: Companies requiring seamless integration with existing enterprise data infrastructure and business intelligence platforms

**Rationale:** Future growth opportunity but faces competition from Bright Data and Oxylabs in enterprise market.

## Target Personas

### Persona 1: Marcus, The E-commerce Data Lead

*Segment: 🥇 Primary*

**Demographics:**

- Name: **Marcus, The E-commerce Data Lead**
- **👤 Age:** 28-35
- **💼 Job Title/Role:** Data Engineering Lead, Analytics Manager, or Head of Business Intelligence
- **🏢 Industry:** E-commerce, Digital Retail, Marketplace Analytics
- **👥 Company Size:** 50-500 employees
- **🎓 Education:** Bachelor's in Computer Science or Data Analytics
- **📍 Location:** Major US tech hubs (San Francisco, Seattle, Austin)
- **⏱️ Years of Experience:** 5-10 years

**💭 Motivation:**

Marcus needs **reliable marketplace data** to power pricing strategies and inventory decisions for his fast-growing e-commerce company. Current **manual data collection** creates bottlenecks and inconsistent results. He requires **automated solutions** that integrate seamlessly with existing analytics infrastructure.

**🎯 Goals:**

- Automate collection of 100,000+ product URLs monthly from major marketplaces
- Reduce data collection costs by 40% while improving success rates above 95%
- Build real-time competitive pricing dashboards for executive decision-making

**😤 Pain Points:**

- Existing scraping solutions get blocked by Amazon and Walmart anti-bot systems
- Internal development team spends 20+ hours weekly maintaining proxy infrastructure
- Inconsistent data quality impacts pricing decisions and competitive analysis accuracy

### Persona 2: Sarah, The Startup CTO

*Segment: 🥈 Secondary*

**Demographics:**

- Name: **Sarah, The Startup CTO**
- **👤 Age:** 30-40
- **💼 Job Title/Role:** CTO, VP of Engineering, or Lead Developer
- **🏢 Industry:** SaaS, Fintech, PropTech, Market Research
- **👥 Company Size:** 10-100 employees
- **🎓 Education:** Master's in Computer Science or Engineering
- **📍 Location:** Global tech startup hubs (SF Bay Area, NYC, London, Berlin)
- **⏱️ Years of Experience:** 8-15 years

**💭 Motivation:**

Sarah leads **technical strategy** for a scaling startup that needs **market intelligence** without diverting engineering resources to infrastructure. She prioritizes **cost-effective solutions** that deliver enterprise-grade reliability. **Rapid scaling demands** require tools that grow with the business.

**🎯 Goals:**

- Implement data collection infrastructure without hiring additional DevOps engineers
- Scale from 10,000 to 500,000 monthly API calls as company grows
- Maintain sub-$5,000 monthly data collection budget while expanding coverage

**😤 Pain Points:**

- Limited engineering bandwidth to build and maintain custom scraping solutions
- Pressure to keep operational costs low while delivering enterprise-quality data
- Need for reliable solutions that won't require constant technical maintenance

### Persona 3: David, The Enterprise Analytics Director

*Segment: 🥉 Tertiary*

**Demographics:**

- Name: **David, The Enterprise Analytics Director**
- **👤 Age:** 35-45
- **💼 Job Title/Role:** Director of Analytics, Head of Data Strategy, or VP of Business Intelligence
- **🏢 Industry:** Fortune 500, Consulting, Financial Services
- **👥 Company Size:** 1,000+ employees
- **🎓 Education:** MBA or Master's in Data Science
- **📍 Location:** Major business centers (NYC, Chicago, London, Toronto)
- **⏱️ Years of Experience:** 10-20 years

**💭 Motivation:**

David manages **enterprise-scale data operations** requiring 5,000,000+ monthly API credits for comprehensive market analysis. He needs **compliance-ready solutions** that meet regulatory standards. **Executive stakeholders** demand reliable insights for strategic planning.

**🎯 Goals:**

- Deploy enterprise-grade data collection supporting multiple business units globally
- Ensure 99.9% uptime and compliance with data governance policies
- Integrate scraping capabilities with existing Tableau and Snowflake infrastructure

**😤 Pain Points:**

- Current solutions lack enterprise features needed for Fortune 500 compliance requirements
- Budget pressure to justify ROI on expensive data collection tools versus alternatives
- Integration challenges with existing enterprise data infrastructure and security protocols

---

# Positioning & Messaging

## Positioning Statement

**ScraperAPI** is a **web scraping infrastructure service** for **data-driven companies** that **delivers reliable large-scale data extraction** through **record-fast results and specialized e-commerce endpoints, powered by a global network of 40+ million proxies across 50+ countries**.

## Positioning Framework

### 1. Needs and Pain Points

What are their customer's needs and pain points around the problem the product is trying to solve?

• Websites implement sophisticated anti-bot measures that block automated data collection attempts [1] [13]
• Managing proxy rotation and maintaining high success rates requires significant technical expertise and infrastructure [1] [11]
• CAPTCHA challenges and browser fingerprinting make large-scale scraping complex and unreliable [1]
• Developers waste hours building and maintaining scraping infrastructure instead of focusing on data analysis [13]
• Getting blocked by target websites leads to incomplete data collection and project delays [13]

### 2. Product Features

What product features will address these needs and solve these pain points?

• Simple API call structure that returns HTML from any web page without technical complexity [1] [13]
• Automatic proxy rotation using a global pool of over 40 million proxies across 50+ countries [13]
• Built-in CAPTCHA solving and browser fingerprinting protection [1]
• Structured data endpoints for major e-commerce platforms like Amazon, Walmart, and Google Shopping [15]
• JavaScript rendering capabilities for dynamic websites with high success rates [11]

### 3. Key Benefits

What are the key benefits (rational and emotional) of those product features?

• Record-fast results even for the toughest websites to scrape with industry-leading performance [11] [13]
• Saves hours of development work and reduces operational costs compared to building in-house solutions [13]
• Global proxy network ensures reliable data extraction without getting blocked [13]
• Specialized e-commerce scraping with structured endpoints eliminates manual data processing [15]
• Developer-friendly API that scales from startup to enterprise usage levels [6] [14]

### 4. Benefit Pillars

Which of those benefits would be categorized as benefit pillars?

🚀 Instant Scale & Reliability, 🎯 E-commerce Intelligence

### 5. Emotional Benefits

What emotional benefits would the user have when they engage with or use the product?

Core Emotional Promise:
Confidence that your data collection will never fail when business decisions depend on it [11] [13]

Supporting Emotions:
• Relief from technical complexity and infrastructure management stress [13]
• Pride in delivering consistent, reliable data insights to executive teams [14]
• Security knowing your competitive intelligence won't be compromised by blocking [13]

### 6. Positioning Statement

What are some positioning statements that could reflect its key benefits, product features, and value?

ScraperAPI is a web scraping infrastructure service for data-driven companies that delivers reliable large-scale data extraction with record-fast results and specialized e-commerce endpoints, powered by a global network of 40+ million proxies across 50+ countries [1] [11] [13] [15]

### 7. Competitive Differentiation

How do they differentiate from other competitors?

ScraperAPI stands out as the best overall web scraping API due to its high success rate, robust proxy management, and ease of use [11]

• **vs. Bright Data**: More accessible pricing and a developer-friendly API compared to enterprise-focused, complex solutions [10]
• **vs. Oxylabs**: Specialized e-commerce endpoints and higher success rates than their $49/month general offering [7] [15]
• **vs. Scrape.do**: Enterprise-grade infrastructure and specialized marketplace support beyond basic free-tier limitations [8] [15]

Key Differentiators:
• Specialized structured data endpoints for Amazon, Walmart, and Google Shopping [15]
• 40+ million proxy network with 50+ countries coverage [13]
• Industry-leading success rates even for toughest websites [11]

## Messaging Guide

| # | Type | Message | Priority |
|---|------|---------|----------|
| 1 | 🎯 Top-Line Message | Get the data you need from any website without getting blocked: ScraperAPI handles all the technical complexity so you can focus on insights, not infrastructure [1] [13] | Primary |
| 2 | 🚀 Instant Scale & Reliability | Scale from 1,000 to 5,000,000+ API calls seamlessly with industry-leading 99%+ success rates [6] [11] | High |
| 3 | 🚀 Instant Scale & Reliability | 40+ million proxies across 50+ countries ensure your data collection never gets blocked [13] | High |
| 4 | 🚀 Instant Scale & Reliability | Record-fast results even from the toughest websites that block traditional scraping methods [11] | Medium |
| 5 | 🚀 Instant Scale & Reliability | Simple API call replaces weeks of infrastructure development and ongoing maintenance [13] | Medium |
| 6 | 🎯 E-commerce Intelligence | Purpose-built structured endpoints for Amazon, Walmart, and Google Shopping deliver clean, parsed data instantly [15] | High |
| 7 | 🎯 E-commerce Intelligence | Automate competitive pricing and inventory monitoring across major marketplaces with one API [15] [17] | High |
| 8 | 🎯 E-commerce Intelligence | Skip the parsing - get structured product data, reviews, and pricing ready for your analytics [15] | Medium |
| 9 | 🎯 E-commerce Intelligence | Monitor competitor pricing across Target, Amazon, and Walmart in real-time without manual processes [17] [15] | Medium |
| 10 | 🎯 E-commerce Intelligence | From startup to enterprise, trusted by 10,000+ companies for mission-critical data collection [14] | Medium |

---

# References

[1] Scraper API - Crunchbase Company Profile & Funding
   https://www.crunchbase.com/organization/scraper-api

[2] Scraper API 2025 Company Profile: Valuation, Funding & Investors | PitchBook
   https://pitchbook.com/profiles/company/436727-62

[3] How Scraper API hit $3M revenue and 10K customers in 2020.
   https://getlatka.com/companies/scraper-api

[4] Scraper API’s Competitors, Revenue, Number of Employees, Funding, Acquisitions & News - Owler Company Profile
   https://www.owler.com/company/scraperapi

[5] Acquiring & Growing a Scraping SaaS to $400k/Mo
   https://www.failory.com/interview/scraper-api

[6] Compare Plans and Get Started for Free - ScraperAPI Pricing
   https://www.scraperapi.com/pricing/

[7] Web Scraper API pricing - Free Trial
   https://oxylabs.io/products/scraper-api/web/pricing

[8] Web Scraping API Pricing | Scrape.do
   https://scrape.do/pricing/

[9] Plans & Billing | ScraperAPI Documentation
   https://docs.scraperapi.com/resources/faq/plans-and-billing

[10] Best Web Scraping APIs in 2026: ScraperAPI vs ScrapeOps vs Bright Data vs Oxylabs (Honest Comparison) - DEV Community
   https://dev.to/agenthustler/best-web-scraping-apis-in-2026-scraperapi-vs-scrapeops-vs-bright-data-vs-oxylabs-honest-51d

[11] The 8 Best Web Scraping APIs in 2024 (Pros, Cons, Pricing)
   https://www.scraperapi.com/web-scraping/best-web-scraping-apis/

[12] Oxylabs vs. Bright Data for web scraping
   https://blog.apify.com/oxylabs-vs-bright-data/

[13] ScraperAPI: Scale Data Collection with a Simple Web Scraping API
   https://www.scraperapi.com/

[14] Customers - ScraperAPI
   https://www.digitalocean.com/customers/scraperapi

[15] Scrape eCommerce Product Data With ScraperAPI
   https://www.scraperapi.com/solutions/ecommerce/

[16] Top 10 E-Commerce Scrapers in 2026: Benchmarked & Tested
   https://aimultiple.com/ecommerce-scraper

[17] Target Scraper API - ScraperAPI
   https://www.scraperapi.com/solutions/ecommerce-data-collection/target-scraper/

[18] All-in-One Review Scraper: G2, Capterra & Trustpilot API · Apify
   https://apify.com/zen-studio/software-review-scraper/api

[19] G2 & Capterra Scraper: Software Reviews & Ratings API through CLI · Apify
   https://apify.com/sovereigntaylor/g2-reviews-scraper/api/cli

[20] Capterra Reviews Scraper · Apify
   https://apify.com/imadjourney/capterra-reviews-scraper

