The Hidden Web is the vast repository of informational databases available only through search form interfaces, accessed by typing a set of keywords into the search forms. Typically, a Hidden Web crawler is employed to autonomously discover and download pages from the Hidden Web. Traditional Hidden Web crawlers do not provide search engines with an optimal search experience because of the excessive number of search requests posed through the form interface in order to exhaustively crawl and retrieve the contents of the target Hidden Web database. In this work, we provide a framework to investigate the problem of optimal search and address it by proposing an effective query-term selection approach based on the frequency and distribution of terms in the document database. The paper focuses on developing a term-weighting scheme called VarDF (an acronym for variable document frequency) that can ease the identification of optimal terms to be used as queries on the interface for ma...
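The abstract names VarDF as a weighting scheme based on the frequency and distribution of terms, but does not give the formula. The following is a minimal illustrative sketch, assuming a score that combines a term's document frequency with the variance of its per-document counts; the function name and scoring rule are assumptions, not the paper's definition.

```python
# Sketch of a VarDF-style term-weighting pass (illustrative assumption only):
# score = document frequency damped by the variance of per-document counts.
from collections import Counter
from statistics import pvariance

def vardf_scores(documents):
    """documents: list of token lists; returns a {term: score} map."""
    per_doc_counts = [Counter(doc) for doc in documents]
    vocabulary = set().union(*per_doc_counts)
    scores = {}
    for term in vocabulary:
        counts = [c[term] for c in per_doc_counts]   # 0 where the term is absent
        df = sum(1 for n in counts if n > 0)         # document frequency
        spread = pvariance(counts)                   # spread across documents
        scores[term] = df / (1.0 + spread)           # frequent but evenly spread wins
    return scores

docs = [["hidden", "web", "crawler"], ["web", "search", "form"], ["web", "database"]]
scores = vardf_scores(docs)
candidate_queries = sorted(scores, key=scores.get, reverse=True)
print(candidate_queries)
```

The top-ranked terms would then serve as the candidate queries submitted through the search form.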
With the ever-proliferating size and scale of the WWW [1], efficient ways of exploring its content are of increasing importance. How can we efficiently retrieve information from it through crawling? In this era of tera-scale data and multi-core processors, multi-threaded processing is a natural solution. Better still, how can we improve crawling performance by using parallel crawlers that work independently? This paper is devoted to the fundamental developments in the field of parallel crawlers [4], highlighting the advantages and challenges arising from their design. The paper also focuses on the distribution of URLs among the various parallel crawling processes or threads, and on the ordering of URLs within each distributed set. How to distribute URLs from the URL frontier to the various concurrently executing crawling process threads is an orthogonal problem. The paper provides a solution to this problem by designing a framework, WebParF, that partitions the URL frontier...
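The abstract states that WebParF partitions the URL frontier among crawling threads but does not describe the partitioning policy. Below is a minimal sketch of one common approach, hashing on the URL's host so that all URLs of a host land on the same thread; the function names are illustrative and this is not necessarily WebParF's actual scheme.

```python
# Sketch of partitioning a URL frontier among N crawler threads by host hash
# (assumed scheme; keeping a host on one thread also simplifies politeness).
import hashlib
from urllib.parse import urlparse
from collections import defaultdict

def partition_frontier(urls, num_threads):
    """Return {thread_id: [urls]} by hashing each URL's host."""
    buckets = defaultdict(list)
    for url in urls:
        host = urlparse(url).netloc.lower()
        digest = hashlib.md5(host.encode("utf-8")).hexdigest()
        buckets[int(digest, 16) % num_threads].append(url)
    return dict(buckets)

frontier = ["http://example.com/a", "http://example.com/b", "http://example.org/x"]
print(partition_frontier(frontier, 4))
```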
The World Wide Web is growing at an uncontrollable rate. Hundreds of thousands of web sites appear every day, adding to the challenge of keeping web directories up to date. Further, the uncontrolled nature of the Web presents difficulties for web page classification. As the number of Internet users grows, so does the need to classify web pages with greater precision in order to present users with pages of their desired class. To date, web page classification has been accomplished mostly through textual categorization methods. Herein, we propose a novel approach to web page classification that uses the HTML information present in a web page for its classification. There are many ways of classifying web pages into various domains; this paper proposes an entirely new dimension to web page classification using Artificial Neural Networks (ANN).
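The abstract proposes feeding HTML information to an ANN but does not specify the features or architecture. The sketch below assumes tag-count features and a small multi-layer perceptron (scikit-learn is used here purely as an illustrative choice of ANN library; the tag list and network size are assumptions).

```python
# Sketch: tag-count features from a page's HTML fed to a small neural network.
from html.parser import HTMLParser
from collections import Counter
from sklearn.neural_network import MLPClassifier  # illustrative ANN choice

TAGS = ["title", "h1", "h2", "a", "img", "table", "form", "meta", "p", "li"]

class TagCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.counts = Counter()
    def handle_starttag(self, tag, attrs):
        self.counts[tag] += 1

def html_features(html_text):
    """Return a fixed-length vector of tag counts for one page."""
    parser = TagCounter()
    parser.feed(html_text)
    return [parser.counts[t] for t in TAGS]

def train_classifier(pages, labels):
    """pages: list of HTML strings; labels: their class names."""
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
    clf.fit([html_features(p) for p in pages], labels)
    return clf
```

A trained classifier would then predict a page's domain from `html_features(page)` alone, without relying on the page's visible text.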
International Journal of Information Retrieval Research, 2014
A huge number of Hidden Web databases exist over the WWW, forming a massive source of high-quality information. Retrieving this information to enrich the repository of a search engine is the prime target of a Hidden Web crawler. Besides this, the crawler should perform the task at an affordable cost and with reasonable resource utilization. This paper proposes a Random ranking mechanism whereby the queries to be raised by the Hidden Web crawler are ranked. By ranking the queries according to the proposed mechanism, the Hidden Web crawler is able to make an optimal choice among the candidate queries and efficiently retrieve the Hidden Web databases. The Hidden Web crawler proposed here also possesses an extensible and scalable framework to improve the efficiency of crawling. The proposed approach is also compared with other methods of Hidden Web crawling in the literature.
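The abstract does not describe the Random ranking function itself. As a rough illustration of the surrounding crawl loop, the sketch below greedily issues the best-ranked candidate query within a request budget; the scoring callback and all names here are placeholders, not the paper's mechanism.

```python
# Sketch of greedy query selection for a Hidden Web crawler. The ranking
# function is a placeholder: the paper's Random ranking mechanism is not
# specified in the abstract.
def rank_queries(candidates, estimate_new_records):
    """Sort candidate queries by estimated number of not-yet-seen records."""
    return sorted(candidates, key=estimate_new_records, reverse=True)

def crawl(candidates, issue_query, estimate_new_records, budget):
    """Issue the best-ranked query each round until the request budget runs out."""
    seen = set()
    while budget > 0 and candidates:
        best = rank_queries(candidates, estimate_new_records)[0]
        candidates.remove(best)
        seen.update(issue_query(best))   # issue_query returns record identifiers
        budget -= 1
    return seen
```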
International Journal of Computer Trends and Technology, 2014
A large amount of data on the WWW remains inaccessible to the crawlers of Web search engines because it can only be exposed on demand, as users fill out and submit forms. The Hidden Web refers to the collection of Web data that a crawler can access only through interaction with a Web-based search form, not simply by traversing hyperlinks. Research on the Hidden Web emerged almost a decade ago, with the main line of work exploring ways to access the content of online databases that are usually hidden behind search forms. Efforts in the area mainly focus on designing Hidden Web crawlers that learn forms and fill them with meaningful values. This paper gives an insight into the various Hidden Web crawlers developed for this purpose, noting the advantages and shortcomings of the techniques employed in each.