Symbolic verification of web crawler functionality and its properties
Main authors: 
Format: Conference proceedings
Language: English
Subjects: 
Online access: Order full text
Abstract: Nowadays, people frequently use search engines to retrieve documents from the Web. Web crawling is the process by which a search engine gathers pages from the Web in order to index them and support search. Web crawlers are the heart of search engines: they continuously crawl the Web to discover new pages that have been added as well as pages that have been removed. Because of the growing and dynamic nature of the Web, it has become a challenge to traverse all URLs found in web documents and to handle these URLs. The entire crawling process may be viewed as traversing a web graph. The aim of this paper is to model check the crawling process and crawler properties using the symbolic model checker NuSMV. The basic operation of a hypertext crawler and the crawler properties are modeled in terms of CTL specifications, and it is observed that the system respects all the constraints by satisfying all the specifications.
DOI: 10.1109/ICCCI.2012.6158649
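
The abstract views crawling as traversal of a web graph whose nodes are pages and whose edges are hyperlinks. As a rough illustration (not the NuSMV model from the paper), the Python sketch below crawls a small, made-up in-memory graph breadth-first and then informally checks a liveness-style property in the spirit of the paper's CTL specifications: every page reachable from the seed is eventually visited. The URLs, graph, and function names are assumptions introduced for the example.

```python
from collections import deque

# Hypothetical in-memory "web graph": URL -> outgoing links.
# This toy graph is an illustrative assumption, not the paper's model.
WEB_GRAPH = {
    "http://a.example": ["http://b.example", "http://c.example"],
    "http://b.example": ["http://c.example"],
    "http://c.example": ["http://a.example", "http://d.example"],
    "http://d.example": [],
}


def crawl(seed: str, graph: dict[str, list[str]]) -> list[str]:
    """Breadth-first traversal of the web graph starting from a seed URL.

    Each URL is processed at most once; newly discovered links are queued
    in a frontier for later visits, mirroring a basic hypertext crawler.
    """
    visited: set[str] = set()
    order: list[str] = []
    frontier = deque([seed])
    while frontier:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        for link in graph.get(url, []):
            if link not in visited:
                frontier.append(link)
    return order


if __name__ == "__main__":
    order = crawl("http://a.example", WEB_GRAPH)
    print("visit order:", order)
    # Informal check of the liveness-style property "every page reachable
    # from the seed is eventually crawled" on this finite toy graph
    # (here all pages happen to be reachable from the seed).
    assert set(order) == set(WEB_GRAPH), "some reachable page was never visited"
```

In the paper itself, such properties are not checked by running the crawler: the crawler's behaviour is encoded as a finite-state model and the properties are written as CTL formulas that NuSMV verifies symbolically over all possible executions.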