Mining the Web's link structure


Bibliographic details
Published in: Computer (Long Beach, Calif.), 1999-08, Vol. 32 (8), p. 60-67
Authors: Chakrabarti, S., Dom, B.E., Kumar, S.R., Raghavan, P., Rajagopalan, S., Tomkins, A., Gibson, D., Kleinberg, J.
Format: Article
Language: English
Abstract

The Web is a hypertext body of approximately 300 million pages that continues to grow at roughly a million pages per day. Page variation is more prodigious than the data's raw scale: taken as a whole, the set of Web pages lacks a unifying structure and shows far more authoring style and content variation than that seen in traditional text document collections. This level of complexity makes an "off-the-shelf" database management and information retrieval solution impossible. To date, index-based search engines for the Web have been the primary tool by which users search for information. Such engines can build giant indices that let you quickly retrieve the set of all Web pages containing a given word or string. Experienced users can make effective use of such engines for tasks that can be solved by searching for tightly constrained keywords and phrases. These search engines are, however, unsuited for a wide range of equally important tasks. In particular, a topic of any breadth will typically contain several thousand or several million relevant Web pages. How then, from this sea of pages, should a search engine select the correct ones, those of most value to the user? Clever is a search engine that analyzes hyperlinks to uncover two types of pages: authorities, which provide the best source of information on a given topic, and hubs, which provide collections of links to authorities. We outline the thinking that went into Clever's design, report briefly on a study that compared Clever's performance to that of Yahoo and AltaVista, and examine how our system is being extended and updated.
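The hub/authority distinction in the abstract comes from iterative link analysis in the spirit of Kleinberg's HITS algorithm. The sketch below shows that mutual reinforcement on a toy link graph; the graph, function name, and iteration count are illustrative assumptions, not the actual Clever implementation:

```python
# Minimal hub/authority iteration (HITS-style) on a toy link graph.
# links[u] is the set of pages that u points to.

def hub_authority_scores(links, iterations=50):
    pages = set(links) | {v for vs in links.values() for v in vs}
    auth = {p: 1.0 for p in pages}
    hub = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # A page's authority grows with the hub scores of pages linking to it.
        auth = {p: sum(hub[u] for u in pages if p in links.get(u, ())) for p in pages}
        # A page's hub score grows with the authority of the pages it links to.
        hub = {u: sum(auth[v] for v in links.get(u, ())) for u in pages}
        # Normalize (L2) so the scores stay bounded across iterations.
        a_norm = sum(x * x for x in auth.values()) ** 0.5 or 1.0
        h_norm = sum(x * x for x in hub.values()) ** 0.5 or 1.0
        auth = {p: x / a_norm for p, x in auth.items()}
        hub = {p: x / h_norm for p, x in hub.items()}
    return hub, auth

# Hypothetical graph: two resource pages both point at the same two
# content pages; a third page points at only one of them.
links = {
    "hub1": {"authA", "authB"},
    "hub2": {"authA", "authB"},
    "stray": {"authA"},
}
hub, auth = hub_authority_scores(links)
# authA, pointed to by every hub, ends up the strongest authority;
# hub1 and hub2, which link to both authorities, end up the strongest hubs.
```

The mutual reinforcement is the key design choice: good hubs are defined by the authorities they point to, and good authorities by the hubs that point to them, so neither score can be computed in isolation.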
DOI: 10.1109/2.781636
ISSN: 0018-9162
EISSN: 1558-0814
Source: IEEE Electronic Library (IEL)
Subjects:
Algorithm design and analysis
Analysis
Construction
Engines
Humans
Information analysis
Information management
Information resources
Links
Search engines
Searching
Tasks
Web pages
Web search
Web sites
Websites
World Wide Web