see Scraper.verbose
Scrapers always check for a local copy of the target resource (using Scraper.checkForLocalRecord) before executing a scrape of an external resource. If the local copy was found (and therefore no external calls were made), this flag is set to true.
A simple, human-readable description of what is being scraped. Used for logging.
Contains all results generated by Scraper.scrape, including recursive calls.
Flag indicating a successful scrape; set to true after a call to Scraper.scrape completes without throwing an error.
Used to override .env settings and force-log the output of a given scraper.
Gets the locally stored record corresponding to a given scraper. Should return null if no local record is found. By default, returns null, so the resource is always scraped.
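As an illustration, a subclass override of this method might look like the following sketch; the in-memory store and the entity shape are hypothetical stand-ins for whatever storage layer the subclass actually uses.

```typescript
interface ArtistEntity { id: number; name: string; }

// Hypothetical in-memory store standing in for a real database table.
const localRecords = new Map<string, ArtistEntity>();

// Override sketch: return the stored entity, or null when none exists
// (null tells the scraper to go fetch the external resource).
async function getLocalRecord(url: string): Promise<ArtistEntity | null> {
  return localRecords.get(url) ?? null;
}
```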
Extracts information from a scraped resource synchronously
Prints a detailed report of local properties for a scraper, used for debugging
Simple CLI reporting tool for debugging unsuccessful scrapes
Requests and stores an external resource, to be parsed later by Scraper.extractInfo. By default, nothing is requested.
Saves scraped, extracted, and parsed information into a local record. By default, does nothing.
the entity that was saved
Entry point for initiating an asset scrape. General scrape outline/method order:
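A hedged sketch of that outline, with the order inferred from the method descriptions in this documentation; the exact sequence and signatures are assumptions, not the library's actual API.

```typescript
// Inferred scrape() control flow: check local store, request the
// external resource, extract info, recurse into dependent scrapes,
// then persist the result. Names are drawn from this documentation.
async function scrapeOutline(scraper: {
  checkForLocalRecord(): Promise<boolean>;
  requestScrape(): Promise<void>;
  extractInfo(): void;
  scrapeDependencies(): Promise<void>;
  saveToLocal(): Promise<void>;
}): Promise<void> {
  const foundLocal = await scraper.checkForLocalRecord();
  if (foundLocal) return;             // local copy: no external calls
  await scraper.requestScrape();      // fetch and store the resource
  scraper.extractInfo();              // parse it synchronously
  await scraper.scrapeDependencies(); // recurse into related scrapes
  await scraper.saveToLocal();        // persist into the local record
}
```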
If set to true, scrapes the external resource regardless of any existing local records
Executes Scraper.scrape on any recursive scrapes found in the initial scrape. Defaults to a promise that resolves immediately, so subclasses with no dependencies don't have to implement this function. See Scraper.scrape for more information on implementation.
the entity that was saved
Intercepts any errors thrown by Scraper.scrape
Scrape the genres associated with this artist
Generated using TypeDoc
Superclass for all "scrapers"
This abstract class describes a standardized method of scraping web pages and saving the results. Its structure is specifically engineered to support complex, relational data stored in an RDBMS such as Postgres. A subclass of AbstractScraper generally describes the process of scraping one type of webpage into one database table. Each instance of a class extending AbstractScraper corresponds to the scrape of one specific URL. The general use pattern for an instance of such a class is to first call the constructor, then Scraper.scrape.
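The pattern described here can be sketched as follows; aside from scrape() itself, every method and class name below is an illustrative assumption rather than the actual API.

```typescript
// Hedged sketch: one subclass per page type, one instance per URL,
// constructed and then scraped via scrape().
abstract class AbstractScraper {
  constructor(protected url: string, protected description: string) {}

  // Entry point: check for a local copy first; scrape externally if absent.
  public async scrape(): Promise<void> {
    const local = await this.getLocalRecord();
    if (local !== null) return; // local copy found: no external calls
    await this.requestScrape();
    this.extractInfo();
    await this.saveToLocal();
  }

  // Default: no local record, so the resource is always scraped.
  protected async getLocalRecord(): Promise<object | null> {
    return null;
  }

  protected async requestScrape(): Promise<void> {} // no-op by default
  protected abstract extractInfo(): void;
  protected async saveToLocal(): Promise<void> {}   // no-op by default
}

// Hypothetical subclass: scrapes one type of page into one table.
class ArtistScraper extends AbstractScraper {
  public parsed = false;
  protected extractInfo(): void {
    this.parsed = true; // stand-in for real parsing logic
  }
}

// General use pattern: construct, then call scrape().
const scraper = new ArtistScraper("https://example.com/artist/1", "artist page");
scraper.scrape();
```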