Class ReviewPageScraper

Manages the scraping and storage of all review pages for a single Rate Your Music user.

This class uses ScraperApi but, unlike the other RYM scrapers, has no one-to-one relationship with a single database entity. RymScraper's data flow requires that one-to-one relationship, so ReviewPageScraper extends ScraperApiScraper instead of RymScraper.
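For orientation, the following is a minimal sketch of how such a class declaration might look. The import paths are assumptions, and only properties documented on this page are shown; this is not the library's actual source.

    import { Repository } from 'typeorm';

    // Assumed module paths; adjust to the real project layout.
    import { ScraperApiScraper } from './scraperApiScraper';
    import { ReviewEntity } from './entities';
    import { ReviewScraper } from './reviewScraper';

    export class ReviewPageScraper extends ScraperApiScraper {
        public currentPage: number;
        public dataReadFromLocal: boolean;
        public pageReviewCount: number;
        public repository: Repository<ReviewEntity>;
        public reviewScrapers: ReviewScraper[];
        public sequentialFailureCount: number;
        public urlBase: string;
    }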

Hierarchy

  • ScraperApiScraper
      • ReviewPageScraper

Index

Constructors

constructor

Properties

currentPage

currentPage: number

dataReadFromLocal

dataReadFromLocal: boolean

Scrapers always check for a local copy of the target resource (using Scraper.checkForLocalRecord) before scraping an external resource. If the resource was found locally (and therefore no external calls were made), this flag is set to true.
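As a hedged usage sketch (pageScraper below is a hypothetical, already-constructed ReviewPageScraper instance), a caller could inspect this flag after scraping:

    // Hypothetical usage: check whether the scrape was served from the local database.
    await pageScraper.scrape();
    if (pageScraper.dataReadFromLocal) {
        console.log('Local record found; no external call was made.');
    } else {
        console.log('Data was scraped from the external resource.');
    }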

description

description: string

A simple, human-readable description of what is being scraped. Used for logging.

name

name: string

pageReviewCount

pageReviewCount: number

profile

redis

Used to cache failed results so that further calls for the same resource can be blacklisted.
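The exact key scheme, client, and expiry are not documented here; the following is only a sketch of the blacklisting idea, assuming an ioredis client, a hypothetical key format, and a hypothetical pageScraper instance.

    import Redis from 'ioredis';

    const redis = new Redis();
    // Hypothetical key format; the real scheme is not documented on this page.
    const blacklistKey = `blacklist:${pageScraper.url}`;

    // After a failed scrape, cache the failure for 24 hours.
    await redis.set(blacklistKey, 'failed', 'EX', 60 * 60 * 24);

    // Before a later scrape, skip URLs that previously failed.
    if (await redis.exists(blacklistKey)) {
        console.log('URL is blacklisted; skipping.');
    }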

repository

repository: Repository<ReviewEntity>

results

results: ResultBatch

Contains all results generated by Scraper.scrape, including recursive calls.

reviewScrapers

reviewScrapers: ReviewScraper[]

Array of between 0 and 25 [[Review]] instances per page.

Protected scrapeRoot

scrapeRoot: ParseElement

Stores the DOM retrieved by ScraperApi.

scrapeSucceeded

scrapeSucceeded: boolean

Flag indicating a successful scrape; set to true after a call to Scraper.scrape completes without throwing an error.

sequentialFailureCount

sequentialFailureCount: number

Number of times [[ReviewPage.scrapePage]] has failed for the current [[ReviewPage.currentPage]].

url

url: string

External URL indicating the scraper's target resource.

urlBase

urlBase: string

Review page URL, without a page number. Example:

https://rateyourmusic.com/collection/frenchie/r0.0-5.0/
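The full page URL presumably combines urlBase with the current page number; the exact suffix format below is an assumption, not confirmed by this page.

    // Hypothetical construction of the per-page URL from urlBase + currentPage.
    const urlBase = 'https://rateyourmusic.com/collection/frenchie/r0.0-5.0/';
    const currentPage = 3;
    const url = `${urlBase}${currentPage}/`;
    // => 'https://rateyourmusic.com/collection/frenchie/r0.0-5.0/3/'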

verbose

verbose: boolean

Used to override .env settings and force-log the output of a given scraper.

Methods

checkForLocalRecord

  • checkForLocalRecord(): Promise<boolean>
  • Checks for a locally stored record corresponding to a given scraper. Resolves to false if no local record is found; the default implementation always resolves to false, so the resource is always scraped. An illustrative override sketch follows below.

    Returns Promise<boolean>
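    The following is a hedged sketch of what an override could look like in a subclass. The subclass name and the pageUrl column on ReviewEntity are hypothetical; ReviewPageScraper itself may simply rely on the default behavior.

        // Hypothetical override: resolve true when a matching row already exists,
        // so scrape() can skip the external request.
        class CachedReviewPageScraper extends ReviewPageScraper {
            public async checkForLocalRecord(): Promise<boolean> {
                const existing = await this.repository.findOne({
                    where: { pageUrl: this.url }, // hypothetical ReviewEntity column
                });
                this.dataReadFromLocal = existing != null;
                return this.dataReadFromLocal;
            }
        }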

Protected extractInfo

  • extractInfo(): void

printInfo

  • printInfo(): void

printResult

  • printResult(): void

requestScrape

  • requestScrape(attempts?: number): Promise<void>

Protected saveToLocal

  • saveToLocal(): Promise<void>

scrape

  • scrape(forceScrape?: boolean): Promise<void>
  • Entry point for initiating an asset scrape. General scrape outline/method order:

    1. Scraper.checkForLocalRecord
    2. If local entity was found, update class props and return.
    3. Scraper.requestScrape
    4. Scraper.extractInfo
    5. Scraper.scrapeDependencies
    6. Scraper.saveToLocal
    7. Update class props and return
    Remarks

    This method should be considered unsafe: there are several points where it can throw errors. This is intentional, and allows easier support for relational data scraping and storage. Scraped assets may have a mixture of required and non-required dependencies, which should be kept in mind when implementing Scraper.scrapeDependencies. A subclass should catch and log errors from non-required scrapes; however, errors from a required scrape should remain uncaught, so that the original call to Scraper.scrape errors out before [[Scraper.save]] is called with incomplete data. A simplified sketch of this flow appears after the parameter list below.

    Parameters

    • Default value forceScrape: boolean = false

      If set to true, scrapes the external resource regardless of any existing local records

    Returns Promise<void>
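    The following is a simplified sketch of the outline above, expressed as a subclass override so the protected steps can be called in order; it is not the library's actual implementation.

        class SketchReviewPageScraper extends ReviewPageScraper {
            public async scrape(forceScrape = false): Promise<void> {
                if (!forceScrape && await this.checkForLocalRecord()) {
                    this.dataReadFromLocal = true;   // steps 1-2: local entity found
                    this.scrapeSucceeded = true;
                    return;
                }
                await this.requestScrape();          // step 3: fetch the DOM via ScraperApi
                this.extractInfo();                  // step 4: parse scrapeRoot
                await this.scrapeDependencies();     // step 5: scrape required/optional children
                await this.saveToLocal();            // step 6: persist to the local database
                this.scrapeSucceeded = true;         // step 7: update class props
            }
        }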

Protected scrapeDependencies

  • scrapeDependencies(): Promise<void>

Protected scrapeErrorHandler

  • scrapeErrorHandler(error: Error): Promise<void>

scrapePage

  • scrapePage(): Promise<void>

Static scrapeDependencyArr

  • scrapeDependencyArr<T>(scrapers: T[], forceScrape?: boolean): Promise<ScrapersWithResults<T>>
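    A hypothetical usage, assuming pageScraper is an already-constructed ReviewPageScraper instance; the shape of ScrapersWithResults<T> is not documented on this page.

        // Batch-scrape every per-page ReviewScraper child in one call.
        const withResults = await ReviewPageScraper.scrapeDependencyArr(
            pageScraper.reviewScrapers,
            false, // forceScrape
        );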
