
Class GenreScraper

Manages the scraping and storage of a genre from Rate Your Music.

Unlike other RYM scrapes, nothing is actually pulled from a webpage, so this class extends Scraper rather than RymScraper. It is still convenient to use the scraper superclass to keep everything consistent, without the unnecessary overhead of RymScraper.
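The hierarchy described above can be sketched as follows. This is a minimal, hypothetical reconstruction for illustration: the real Scraper base class has more members, and the constructor bodies here are assumptions, not the actual implementation.

```typescript
// Hypothetical sketch of the class shape documented on this page.
abstract class Scraper {
  public description: string;
  public dataReadFromLocal = false;
  public scrapeSucceeded = false;

  constructor(description: string, public verbose = false) {
    this.description = description;
  }

  protected abstract extractInfo(): void;
}

// GenreScraper needs no HTTP layer, so it extends the plain Scraper
// base rather than RymScraper, avoiding the page-request overhead.
class GenreScraper extends Scraper {
  constructor(public name: string, verbose = false) {
    super(`RYM genre: ${name}`, verbose);
  }

  protected extractInfo(): void {
    // No webpage to parse; genre data comes from the name itself.
  }
}
```

Usage would then look like `new GenreScraper('shoegaze', true)`, matching the constructor signature below.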

Hierarchy

Index

Constructors

constructor

  • new GenreScraper(name: string, verbose?: boolean): GenreScraper

Properties

dataReadFromLocal

dataReadFromLocal: boolean

Scrapers always check for a local copy of the target resource (using Scraper.checkForLocalRecord) before executing a scrape from an external resource. If the resource was found (and therefore no external calls made), this is set to true.

description

description: string

A simple, human-readable description of what is being scraped. Used for logging.

name

name: string

Private repository

repository: Repository<RymGenreEntity>

TypeORM repository handling all data flow in/out of the genre table.

results

results: ResultBatch

Contains all results generated by Scraper.scrape, including recursive calls.

scrapeSucceeded

scrapeSucceeded: boolean

Flag indicating a successful scrape, set to true after a non-error-throwing call to Scraper.scrape.

verbose

verbose: boolean

Used to override .env settings and force-log the output of a given scraper.

Methods

checkForLocalRecord

  • checkForLocalRecord(): Promise<boolean>

Protected extractInfo

  • extractInfo(): void

getEntity

printInfo

  • printInfo(): void

printResult

  • printResult(): void

requestScrape

  • requestScrape(): Promise<void>

Protected saveToLocal

  • saveToLocal(): Promise<void>

scrape

  • scrape(forceScrape?: boolean): Promise<void>
  • Entry point for initiating an asset scrape. General scrape outline/method order:

    1. Scraper.checkForLocalRecord
    2. If a local entity was found, update class props and return.
    3. Scraper.requestScrape
    4. Scraper.extractInfo
    5. Scraper.scrapeDependencies
    6. Scraper.saveToLocal
    7. Update class props and return
    Remarks

    This method should be considered unsafe: there are several points where it can throw errors. This is intentional, and allows easier support for relational data scraping/storage. Scraped assets may have a mixture of required and non-required dependencies, which should be kept in mind when implementing Scraper.scrapeDependencies. A subclass should catch and log errors from non-required scrapes; however, errors from a required scrape should remain uncaught, so the original call to Scraper.scrape errors out before Scraper.saveToLocal is called with incomplete data.

    Parameters

    • Default value forceScrape: boolean = false

      If set to true, scrapes the external resource regardless of any existing local records

    Returns Promise<void>
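The outline above can be sketched as a method body. The stub methods below stand in for the real implementations; only the control flow (local-first check, forceScrape override, and flag updates) reflects this page.

```typescript
// Illustrative sketch of the scrape() flow; stubs replace real logic.
class SketchScraper {
  public dataReadFromLocal = false;
  public scrapeSucceeded = false;

  public async scrape(forceScrape = false): Promise<void> {
    // Steps 1-2: prefer the local copy unless forceScrape overrides it.
    if (!forceScrape && (await this.checkForLocalRecord())) {
      this.dataReadFromLocal = true;
      this.scrapeSucceeded = true;
      return;
    }
    // Steps 3-6: external scrape; any throw here propagates to the
    // caller, so nothing is saved for an incomplete scrape.
    await this.requestScrape();
    this.extractInfo();
    await this.scrapeDependencies();
    await this.saveToLocal();
    // Step 7: update class props and return.
    this.scrapeSucceeded = true;
  }

  protected async checkForLocalRecord(): Promise<boolean> { return true; }
  protected async requestScrape(): Promise<void> {}
  protected extractInfo(): void {}
  protected async scrapeDependencies(): Promise<void> {}
  protected async saveToLocal(): Promise<void> {}
}
```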

Protected scrapeDependencies

  • scrapeDependencies(): Promise<void>
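Per the remarks on scrape above, a subclass implementing this hook should let errors from required dependencies propagate while catching and logging errors from non-required ones. A minimal sketch of that policy, with hypothetical requiredScraper/optionalScraper members not in the actual API:

```typescript
// Stand-in for any scraper a dependency hook might invoke.
interface DepScraper {
  scrape(): Promise<void>;
}

class DependencySketch {
  constructor(
    private requiredScraper: DepScraper,
    private optionalScraper: DepScraper,
  ) {}

  public async scrapeDependencies(): Promise<void> {
    // Required dependency: an error here must propagate, so the parent
    // scrape() aborts before saving incomplete data.
    await this.requiredScraper.scrape();

    // Non-required dependency: catch and log, then carry on.
    try {
      await this.optionalScraper.scrape();
    } catch (err) {
      console.error('non-required dependency failed:', err);
    }
  }
}
```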

Protected scrapeErrorHandler

  • scrapeErrorHandler(error: Error): Promise<void>

Static createScrapers

Static scrapeDependencyArr

  • scrapeDependencyArr<T>(scrapers: T[], forceScrape?: boolean): Promise<ScrapersWithResults<T>>
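A batch helper with this signature might look like the sketch below. ResultBatch and ScrapersWithResults are simplified to plain arrays here; this is an assumed shape for illustration, not the library's implementation.

```typescript
// Stand-in for any scraper the batch helper can drive; `results`
// simplifies the real ResultBatch to a string array.
interface BatchScraper {
  results: string[];
  scrape(forceScrape?: boolean): Promise<void>;
}

// Simplified stand-in for the ScrapersWithResults<T> return type.
interface ScrapersWithResults<T> {
  scrapers: T[];
  results: string[];
}

async function scrapeDependencyArr<T extends BatchScraper>(
  scrapers: T[],
  forceScrape = false,
): Promise<ScrapersWithResults<T>> {
  const results: string[] = [];
  for (const scraper of scrapers) {
    await scraper.scrape(forceScrape);
    // Merge each scraper's result batch into the combined results.
    results.push(...scraper.results);
  }
  return { scrapers, results };
}
```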

Generated using TypeDoc