Solve Web Crawler Errors in PHP

I’m building a simple web crawler in PHP that crawls .edu domains, starting from seed URLs of parent pages. It uses Simple HTML DOM for parsing, and I’ve written the core logic myself to manage the crawling process.

Below is the code I’ve put together, followed by an explanation of the issues I encountered.

My Script With Error:

private function initiateChildCrawler($parent_Url_Html) {

    global $CFG;
    static $foundLink;
    static $parentID;
    static $urlToCrawl_InstanceOfChildren;

    $forEachCount = 0;
    foreach ($parent_Url_Html->getHTML()->find('a') as $foundLink) {
        $forEachCount++;
        if ($forEachCount < 500) {
            $foundLink->href = url_to_absolute($parent_Url_Html->getURL(), $foundLink->href);

            if ($this->validateEduDomain($foundLink->href)) {
                // Else condition can be implemented later
                $parentID = $this->loadSaveInstance->parentExists_In_URL_DB_CRAWL($this->returnParentDomain($foundLink->href));
                if ($parentID !== FALSE) {
                    if ($this->loadSaveInstance->checkUrlDuplication_In_URL_DB_CRAWL($foundLink->href) === FALSE) {
                        $urlToCrawl_InstanceOfChildren = new urlToCrawl($foundLink->href);
                        if ($urlToCrawl_InstanceOfChildren->getSimpleDomSource($CFG->finalContext) !== FALSE) {
                            $this->loadSaveInstance->url_db_html(
                                $urlToCrawl_InstanceOfChildren->getURL(),
                                $urlToCrawl_InstanceOfChildren->getHTML()
                            );
                            $this->loadSaveInstance->saveCrawled_To_URL_DB_CRAWL(
                                NULL,
                                $foundLink->href,
                                "crawled",
                                $parentID
                            );

                            /* Uncomment this part for recursion if required
                            if ($recursiveCount < 1) {
                                $this->initiateChildCrawler($urlToCrawl_InstanceOfChildren);
                            }
                            */
                        }
                    }
                }
            }
        }
    }
}

So, here’s what’s happening: the initiateChildCrawler function is being called by the initiateParentCrawler function, which passes the parent link to the child crawler. For example, if the parent link is www.berkeley.edu, the crawler will go through all the links on its main page and return their HTML content. This process continues until all the seed URLs are crawled.

Here’s how it works step-by-step:

  1. harvard.edu → The child crawler collects all links on the main page and returns their HTML content.
  2. berkeley.edu → Same process—collects links and returns their HTML content, then moves to the next parent URL.

Most of the other functions involved are pretty straightforward; I’ve included rough sketches of the two domain helpers below for context.
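For reference, these are simplified versions of validateEduDomain() and returnParentDomain() — the actual implementations differ in details, but the idea is the same:

private function validateEduDomain($url) {
    // Accept only links whose host ends in ".edu" (e.g. www.berkeley.edu).
    $host = parse_url($url, PHP_URL_HOST);
    if (empty($host)) {
        return FALSE;
    }
    return (bool) preg_match('/\.edu$/i', $host);
}

private function returnParentDomain($url) {
    // Reduce a full URL to scheme + host, e.g. "http://www.berkeley.edu".
    $parts = parse_url($url);
    if ($parts === FALSE || empty($parts['host'])) {
        return FALSE;
    }
    $scheme = isset($parts['scheme']) ? $parts['scheme'] : 'http';
    return $scheme . '://' . $parts['host'];
}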

Now, here’s the issue I’m facing: after initiateChildCrawler finishes the foreach loop over a parent page’s links, the function doesn’t return properly. When I run the script from the CLI it crashes, and in the browser the script simply terminates.

Interestingly, if I limit the number of child links to 10 or fewer (by lowering the limit that $forEachCount is checked against), everything works fine. Beyond that, the crawler breaks down.

I’d appreciate any advice or solutions you can offer to help me resolve this issue.

Here’s a restructured version of your PHP code, followed by an explanation. I’ve also added changes to avoid potential issues with recursive crawling and resource exhaustion, which might be causing the crashes.

Corrected Code:

private function initiateChildCrawler($parent_Url_Html) {
    global $CFG;
    static $foundLink;
    static $parentID;
    static $urlToCrawl_InstanceOfChildren;

    $forEachCount = 0;

    foreach ($parent_Url_Html->getHTML()->find('a') as $foundLink) {
        $forEachCount++;

        // Limit crawling to 500 links per parent URL to avoid overloading.
        if ($forEachCount >= 500) break;

        // Normalize the link to an absolute URL.
        $foundLink->href = url_to_absolute($parent_Url_Html->getURL(), $foundLink->href);

        // Validate if the link belongs to an .edu domain.
        if ($this->validateEduDomain($foundLink->href)) {
            // Check if the parent domain is already in the database.
            $parentID = $this->loadSaveInstance->parentExists_In_URL_DB_CRAWL(
                $this->returnParentDomain($foundLink->href)
            );

            if ($parentID !== FALSE) {
                // Avoid duplicate URLs by checking if the link is already crawled.
                if ($this->loadSaveInstance->checkUrlDuplication_In_URL_DB_CRAWL($foundLink->href) === FALSE) {
                    $urlToCrawl_InstanceOfChildren = new urlToCrawl($foundLink->href);

                    // Check if the child URL's HTML content can be fetched.
                    if ($urlToCrawl_InstanceOfChildren->getSimpleDomSource($CFG->finalContext) !== FALSE) {
                        // Save the HTML content and mark the URL as crawled.
                        $this->loadSaveInstance->url_db_html(
                            $urlToCrawl_InstanceOfChildren->getURL(),
                            $urlToCrawl_InstanceOfChildren->getHTML()
                        );
                        $this->loadSaveInstance->saveCrawled_To_URL_DB_CRAWL(
                            NULL, $foundLink->href, "crawled", $parentID
                        );

                        /*
                        Optional recursion: Uncomment if deeper crawling is required.
                        Be cautious with recursion to prevent stack overflow or timeouts.
                        if ($recursiveCount < 1) {
                            $this->initiateChildCrawler($urlToCrawl_InstanceOfChildren);
                        }
                        */
                    }
                }
            }
        }
    }
}

Explanation

  1. Limiting Crawling with $forEachCount:
    • I added a condition to break the loop if the limit of 500 links is reached. This prevents overloading your crawler and helps avoid crashes.
  2. Absolute URL Conversion:
    • The url_to_absolute() function ensures that all relative links become absolute URLs to prevent broken links.
  3. Domain Validation:
    • The validateEduDomain() function checks whether the link belongs to a .edu domain. This ensures the crawler only processes relevant links.
  4. Duplicate Check:
    • Before processing a link, the function checks the database to see if the URL has already been crawled, avoiding redundant work.
  5. Recursion Control:
    • The commented-out recursion section can be enabled if deeper-level crawling is required. Be cautious with recursion, as it can cause stack overflows or exceed the script’s timeout limits; a depth-limited sketch is shown after this list.
  6. Using Static Variables:
    • Static variables like $foundLink and $parentID ensure that their values persist across function calls, helping maintain state without passing too many parameters.
  7. Crash Prevention:
    • The original issue seemed to be caused by excessive recursion or resource exhaustion from unhandled loops. Breaking the loop at 500 links and controlling recursion should prevent the CLI and browser crashes; the sketch below also shows how to release each child page’s DOM once it has been saved.
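As a rough sketch of points 5 and 7 combined — this is not a drop-in replacement, and it assumes your getHTML() hands back the underlying simple_html_dom object (which has a clear() method for freeing memory) — the recursion could be wired up with an explicit depth parameter and explicit memory release, for example:

// Sketch only: depth-limited recursion plus explicit memory release.
// Assumes getHTML() returns the simple_html_dom object created by getSimpleDomSource().
private function initiateChildCrawler($parent_Url_Html, $depth = 0) {
    global $CFG;

    if ($depth > 1) {
        return; // Hard depth limit: the parent page plus one level of children.
    }

    $forEachCount = 0;
    foreach ($parent_Url_Html->getHTML()->find('a') as $foundLink) {
        if (++$forEachCount >= 500) break;

        $foundLink->href = url_to_absolute($parent_Url_Html->getURL(), $foundLink->href);
        if (!$this->validateEduDomain($foundLink->href)) continue;

        $parentID = $this->loadSaveInstance->parentExists_In_URL_DB_CRAWL(
            $this->returnParentDomain($foundLink->href)
        );
        if ($parentID === FALSE) continue;
        if ($this->loadSaveInstance->checkUrlDuplication_In_URL_DB_CRAWL($foundLink->href) !== FALSE) continue;

        $child = new urlToCrawl($foundLink->href);
        if ($child->getSimpleDomSource($CFG->finalContext) === FALSE) continue;

        $this->loadSaveInstance->url_db_html($child->getURL(), $child->getHTML());
        $this->loadSaveInstance->saveCrawled_To_URL_DB_CRAWL(NULL, $foundLink->href, "crawled", $parentID);

        // Recurse with an explicit depth counter instead of a static $recursiveCount.
        $this->initiateChildCrawler($child, $depth + 1);

        // Free the child's parsed DOM before moving on; Simple HTML DOM holds
        // internal references between nodes, so clearing it keeps memory from
        // accumulating with every link processed.
        $child->getHTML()->clear();
        unset($child);
    }
}

Passing the depth explicitly avoids relying on an external counter, and clearing each child DOM keeps memory roughly constant per link instead of growing with every page parsed.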
