Papers by Dr. Uma Pavan Kumar Kethavarapu
Ontologies are, at their core, specifications of conceptualization that provide a shared, global understanding of common concepts and relations. Their benefit lies in the common usage of concepts, sub-concepts, and relations. The main theme of this article is to highlight the importance of ontologies and the related models or frameworks available in the field, the automatic construction of ontologies and its advantages, and the integration of multiple ontologies using the most commonly applied methods. We also explore basic aspects of dynamic ontology, the evolutionary mechanism through which an up-to-date ontology can be maintained with respect to the source and the conceptual model. The main focus of the paper is to survey the existing, widely used work on ontology creation, ontology integration, and dynamic ontologies. The aim of our research is to give a comprehensive study of ontology research from which future directions and scope in this field can be identified.
In this paper we discuss an automatic ontology construction method for a job recommendation system in which, based on users' historical data and preferences, dynamic notifications about suitable jobs are posted. The ontology is constructed automatically, and suitable jobs are extracted for users from the constructed data. The system focuses on generating notifications for users by analyzing the available content in the user context and the job postings from the job portal context. These notifications are helpful for both job seekers and job portals. Employers generally spend considerable time and money filtering suitable candidates for a job requirement, and job seekers likewise spend time finding notifications that match their skills and experience. Our job recommendation system addresses both problems by providing a way of getting the re...
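To make the notification step concrete, here is a minimal sketch on hypothetical data of the matching the abstract describes: postings extracted from the ontology are compared against a user's skills and preferences, and only suitable jobs are notified. The field names and matching rule are illustrative assumptions, not the paper's actual method.

# A minimal sketch, on hypothetical data, of notifying a user about suitable jobs.
user = {"skills": {"python", "sql"}, "preferred_location": "Bangalore"}

postings = [
    {"title": "Data Analyst", "skills": {"sql", "excel"}, "location": "Bangalore"},
    {"title": "Java Developer", "skills": {"java"}, "location": "Pune"},
    {"title": "ML Engineer", "skills": {"python", "sql"}, "location": "Bangalore"},
]

def suitable(user, posting):
    # A job is notified when at least one skill overlaps and the location
    # matches the user's preference (a simplification, not the paper's rule).
    return bool(user["skills"] & posting["skills"]) and \
        posting["location"] == user["preferred_location"]

notifications = [p["title"] for p in postings if suitable(user, p)]
print(notifications)   # ['Data Analyst', 'ML Engineer']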
Data mining and knowledge engineering, 2011
In this paper we present the concept of distributed data mining and the advantages of a combined architecture that uses distribution as a backbone for the mining process. To demonstrate distributed data mining we present an architecture with various levels of users, data, and networks, making use of DMQL (Distributed Mining Query Language) and a converter mechanism. After the required data has been obtained, a security layer in the architecture guards against malicious software or data. The entire architecture relies on cloud computing, which involves the migration of the different kinds of network data mining processes.
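The core idea of distributed mining, separate from the paper's DMQL-based architecture, can be illustrated with a minimal sketch: each node mines its own partition locally and a coordinator merges the partial results. The partitions and the "mining" step (simple item counting) below are illustrative stand-ins only.

# A minimal sketch of local mining per node followed by a global merge.
from collections import Counter

node_partitions = {
    "node1": [["milk", "bread"], ["bread", "butter"]],
    "node2": [["milk", "bread", "butter"], ["milk"]],
}

def mine_locally(transactions):
    # Local mining step: count item occurrences on one node's data.
    counts = Counter()
    for t in transactions:
        counts.update(t)
    return counts

# Coordinator merges the local models into a global result.
global_counts = Counter()
for node, transactions in node_partitions.items():
    global_counts += mine_locally(transactions)

print(global_counts.most_common(2))   # e.g. [('bread', 3), ('milk', 3)]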
2017 IEEE International Conference on Current Trends in Advanced Computing (ICCTAC), 2017
Data quality comes from consistency, which depends on a reliable source. An ontology is used to represent a common body of knowledge shared among users in a distributed manner. The literature contains many ontologies, for example in medicine, biotechnology, and the automobile domain. All of these ontologies allow various levels of users to share knowledge, but automatic ontology construction, change management at the source to track modifications, and updating those changes to obtain dynamic ontologies remain complex. This article explains automatic ontology construction in a simple and efficient manner, along with a flexible method of change management. Dynamic ontology construction is the leading aspect of the article: changes detected at the source are merged into the ontology. The significance of the research is a recommendation system built on job portal data that helps recruiters and job seekers search for jobs; moreover, there is no common framework that combines automatic ontology construction, change tracking to identify modifications in the source data, and construction of a dynamic ontology. We believe the work outperforms other ontology construction methods. The innovation here is a framework that holds all of these activities: ontology construction, change management, and dynamic ontology generation.
International Journal of Knowledge Engineering and Soft Data Paradigms, 2016
An ontology is a common place from which shared knowledge about common concepts can be obtained. Applications of ontologies can be found in medical domains, university databases, and botanical research. The main objective of our research is to propose a job recommendation system based on an ontology. The process is to construct the ontology from various job portals and notify the result set to both job seekers and employers. The key contribution described here is adding dynamism to the ontology through slowly changing source detection based on a lookup and update strategy: the lookup identifies whether a change has happened, while the update strategy tracks the change as an update, deletion, or insertion in the source. The benefit of our work is dynamic data management for the constructed ontology. We believe our method gives strong results on accuracy and time for job notifications.
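A minimal sketch of the lookup-and-update idea, on hypothetical job records keyed by id, is shown below. The lookup compares two snapshots to see whether anything changed; the update step classifies each change as an insertion, update, or deletion so it can be pushed into the ontology. The record format is an assumption for illustration.

# A minimal sketch of slowly changing source detection via lookup and update.
old_snapshot = {"j1": "Java Developer, 3 yrs", "j2": "Data Analyst, 2 yrs"}
new_snapshot = {"j1": "Java Developer, 4 yrs", "j3": "ML Engineer, 1 yr"}

def lookup(old, new):
    # Lookup: did the source change at all?
    return old != new

def classify_changes(old, new):
    # Update strategy: classify changes as insert, update, or delete.
    inserted = {k: new[k] for k in new.keys() - old.keys()}
    deleted = {k: old[k] for k in old.keys() - new.keys()}
    updated = {k: new[k] for k in new.keys() & old.keys() if new[k] != old[k]}
    return inserted, updated, deleted

if lookup(old_snapshot, new_snapshot):
    ins, upd, dele = classify_changes(old_snapshot, new_snapshot)
    print("insert:", ins)   # {'j3': 'ML Engineer, 1 yr'}
    print("update:", upd)   # {'j1': 'Java Developer, 4 yrs'}
    print("delete:", dele)  # {'j2': 'Data Analyst, 2 yrs'}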
Proceedings of the International Conference on Informatics and Analytics, 2016
The main focus of our research is to construct a Job Recommendation System (JRS) based on ontology construction. The present discussion concerns the web data extraction process used in constructing the ontology, and in particular how data can best be extracted from various web pages to minimize time and improve efficiency. The process followed to construct the ontology is to first identify various job portal sources and then extract data from those portals; for this we propose a method based on FSA (Finite State Automata) and NLP (Natural Language Processing). The outcome of this method is efficient data extraction in terms of time and space usage compared with other models.
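The following is a minimal sketch of an FSA-style extractor of the general kind the abstract refers to: a small state machine walks the lines of a hypothetical job-posting page dump and captures the value that follows each known label. The labels and page layout are assumptions for illustration, not the paper's actual automaton.

# A minimal finite-state extraction sketch over a hypothetical page dump.
page_lines = [
    "Job Title:", "Senior Python Developer",
    "Required Skills:", "Python, Django, SQL",
    "Experience:", "4-6 years",
]

TRANSITIONS = {          # label line -> state that captures the next line
    "Job Title:": "title",
    "Required Skills:": "skills",
    "Experience:": "experience",
}

state, record = "scan", {}
for line in page_lines:
    if state == "scan":
        state = TRANSITIONS.get(line.strip(), "scan")
    else:
        record[state] = line.strip()   # capture the value, return to scanning
        state = "scan"

print(record)
# {'title': 'Senior Python Developer', 'skills': 'Python, Django, SQL', 'experience': '4-6 years'}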
Procedia Computer Science, 2016
The basis of our research is to construct a job recommendation system for job seekers by collecting job portal data. Because of the huge volume of data in job portals, employers face difficulty identifying the right candidate for a required skill and experience, and job seekers face the problem of judging the suitability of a job for their skill and experience. Knowledge acquisition from such large data sources is very difficult. Classical development of domain ontologies is typically based entirely on strong human participation; it does not adequately fit the requirements of new applications, which need a more dynamic ontology and the ability to manage a quantity of concepts that a human cannot handle alone. The main focus of our work is to generate a job recommendation system by taking into account the data posted on websites and the data from job seekers, through the creation of a dynamic ontology. We strongly believe our system will give suitable job recommendations for both employers and job seekers without their spending much time. To achieve this, we first extract the data from various web pages and store it in .csv files. In the second stage, the stored input files are used by the similarity measure and ontology creation module, which generates the corresponding Web Ontology Language (.owl) file. The third stage creates the ontology from the generated .owl file using the Protégé tool.
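As a rough illustration of the second stage (csv to .owl), here is a minimal sketch using rdflib. The column names (title, skill), namespace URI, and file names are assumptions for illustration, not the paper's actual schema or module.

# A minimal sketch: turn extracted job rows (.csv) into an OWL file with rdflib.
import csv
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF, RDFS

JOB = Namespace("http://example.org/jobportal#")   # hypothetical namespace

g = Graph()
g.bind("job", JOB)
g.add((JOB.JobPosting, RDF.type, OWL.Class))
g.add((JOB.requiresSkill, RDF.type, OWL.DatatypeProperty))

with open("jobs.csv", newline="", encoding="utf-8") as f:
    for i, row in enumerate(csv.DictReader(f)):
        posting = URIRef(JOB[f"posting_{i}"])
        g.add((posting, RDF.type, JOB.JobPosting))
        g.add((posting, RDFS.label, Literal(row["title"])))
        g.add((posting, JOB.requiresSkill, Literal(row["skill"])))

# Serialize as RDF/XML so the result can be opened in Protégé (stage three).
g.serialize(destination="jobs.owl", format="xml")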
Advances in Space Research, 1993
A Geographical Information System (GIS) is an efficient tool for collecting, storing, retrieving, transforming, and displaying spatial and non-spatial data from the real world for a particular set of purposes. Using a GIS approach, an attempt has been made to select suitable sites for check dams for harvesting rainwater. In the present study, Alur taluk of Hassan district, Karnataka, India, covering a 400 sq. km area, was chosen. This area receives good rainfall annually, but because of the hilly terrain, runoff is high. In spite of the natural gift of good rainfall, people in this area grow mainly rainfed crops as there are few irrigation facilities. Surface water harvesting was therefore given priority, and a suitable methodology was developed for selecting sites for constructing check dams. Various thematic maps of the study area at 1:50,000 scale were input and analysed using the ARC/INFO GIS package. Second- and third-order streams were selected and superimposed over the lineament (fracture) map to exclude stream segments controlled by fractures and thereby avoid weak zones and seepage. Suitable sites were selected depending on contour and slope. The probable Waterspread Area (WSA) and Immediate Downstream Area (IDA) were drawn, and these polygons were intersected with different thematic maps to determine suitability. Before intersecting, appropriate weightages (least suitable to most suitable on a scale of 1 to 5) were assigned to the different classes of thematic maps, keeping in mind that only the worst area should be submerged in the case of the WSA, while for the IDA weightages were assigned in view of the requirement for water. A Normalized Cumulative Weighted Index (NCWI) was developed to find the suitability of the sites from the viewpoint of natural resources. All seven sites selected were found quite reasonable based on the NCWI values, the villages that would benefit were identified, and field checks confirmed that the sites are quite suitable.
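The exact NCWI formula is not stated in the abstract, so the following is only a plausible minimal sketch: the weighted area of each thematic class inside a candidate polygon is summed and normalized by the total area times the maximum weight, on the paper's 1 (least suitable) to 5 (most suitable) scale. The example numbers are hypothetical.

# A minimal sketch of a normalized cumulative weighted index (assumed formula).
def ncwi(class_areas_and_weights, max_weight=5):
    """class_areas_and_weights: list of (area_sq_km, weight) pairs for one
    candidate polygon (WSA or IDA) intersected with a thematic map."""
    total_area = sum(area for area, _ in class_areas_and_weights)
    weighted = sum(area * w for area, w in class_areas_and_weights)
    return weighted / (total_area * max_weight) if total_area else 0.0

# Hypothetical site: 2.0 sq km of weight-5 land, 0.5 sq km of weight-2 land.
print(round(ncwi([(2.0, 5), (0.5, 2)]), 3))   # closer to 1.0 = more suitable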
International Journal of Engineering & Technology, 2018
The new trend in the research and real time applications is Internet of Things (IOT). The functio... more The new trend in the research and real time applications is Internet of Things (IOT). The functional benefits of IOT are ranging from smart house to smart cities. The main purpose of IOT is to integrate various devices logically and interacting between the devices without human intervention. The current discussion mainly focuses on leveraging the capacity of analytics in IOT and resolves the storage issues of the bulk data generated by IOT. The proposed idea gives the usage of Hadoop platform to store the data and from that data performing analytics for the sake of better utilization of IOT communications. The importance is explained with some real time scenarios where there is perfect blend of Hadoop platform and IOT. To store the various categories of the data Hadoop Distributed File System (HDFS) can be used, and to ingest the data from external platforms we can make use of Sqoop or Flume. The data available in HDFS can be used to process with the usage of Map Reduce (MR)techniqu...
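A minimal Hadoop Streaming sketch (mapper and reducer in one Python file) of the kind of MapReduce processing described above is shown below. The input format (device_id,temperature per line), file names, and HDFS paths are assumptions for illustration.

# Minimal Hadoop Streaming sketch: average temperature per IoT device.
import sys

def mapper():
    # Emit "device_id \t temperature" for each reading read from stdin.
    for line in sys.stdin:
        parts = line.strip().split(",")
        if len(parts) == 2:
            print(f"{parts[0]}\t{parts[1]}")

def reducer():
    # Input arrives sorted by key; average the readings per device.
    current, total, count = None, 0.0, 0
    for line in sys.stdin:
        device, value = line.strip().split("\t")
        if device != current and current is not None:
            print(f"{current}\t{total / count:.2f}")
            total, count = 0.0, 0
        current = device
        total += float(value)
        count += 1
    if current is not None:
        print(f"{current}\t{total / count:.2f}")

if __name__ == "__main__":
    # e.g. hadoop jar hadoop-streaming.jar -mapper "sensor.py map" \
    #      -reducer "sensor.py reduce" -input /iot/readings -output /iot/avg
    mode = sys.argv[1] if len(sys.argv) > 1 else "map"
    (mapper if mode == "map" else reducer)()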
Machine learning, deep learning, and data science are buzzwords nowadays, and using these techniques with technologies such as R and Python is common in industry and academia. The present work deals with the logic inherent in algorithms for classification, dimensionality reduction, and recommender systems, with suitable examples. Applications such as Facebook, Twitter, and LinkedIn are mentioned as daily users of these algorithms, and online platforms such as Amazon and Flipkart are discussed as areas where recommender systems are most commonly applied. The outcome of the work is an account of the logic hidden in the use of these algorithms and, on the implementation side, the packages and functions helpful for implementing them. We believe the work will be useful to researchers and academicians from the algorithmic perspective, and that they can extend it by contributing their own views. Compared with general-purpose programming, R and Python simplify the logic of these algorithms so that both the code and the understanding of the problem are simpler. The work studies mail respondents in relation to a company's allocation of a house in response to their mail, considering the urban, semi-urban, and rural location of the customers and their income range. The implementation uses classification in R, and the corresponding results are reported with an explanation of the values obtained.
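The paper's implementation is in R; the following is only an equivalent minimal sketch in Python with scikit-learn of a classifier predicting house allocation from area type and income. The toy data, column names, and labels are assumptions, not the paper's dataset or results.

# A minimal scikit-learn classification sketch on hypothetical allocation data.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

data = pd.DataFrame({
    "area":      ["urban", "semi-urban", "rural", "urban", "rural", "semi-urban"],
    "income":    [85000, 42000, 18000, 60000, 25000, 50000],
    "allocated": [1, 1, 0, 1, 0, 1],   # 1 = house allocated in mail response
})

model = Pipeline([
    ("encode", ColumnTransformer(
        [("area", OneHotEncoder(), ["area"])], remainder="passthrough")),
    ("clf", DecisionTreeClassifier(max_depth=2, random_state=0)),
])
model.fit(data[["area", "income"]], data["allocated"])

# Predict for a hypothetical new respondent.
new = pd.DataFrame({"area": ["rural"], "income": [30000]})
print(model.predict(new))   # e.g. [0] -> not allocated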
Emerging Research in Data Engineering Systems and Computer Communications
The trend in current industry and academia is machine learning (ML), and it is not hype but reality. Market requirements are no longer limited by earlier computing and storage models once those models are integrated with ML algorithms. Current industry, research, and domains such as banking, finance, retail, and medicine depend on collecting huge amounts of data and analysing it to better serve their stakeholders, so organizations have to focus on better storage models and advanced ML logic to meet people's needs while handling those huge amounts of data in a limited amount of time. ML research is advancing rapidly with the involvement of social media companies such as Facebook, Twitter, and Google, which not only use ML in their applications but also contribute new algorithms and APIs in the context of big data, ML, and deep learning. The objective of the paper is to walk through ML, big data, and deep learning fundamentals, specifying various algorithms with reference to the above-mentioned companies and their contributions to the ML landscape, including APIs (TensorFlow), services such as Priority Inbox, and algorithms such as RankBrain. The discussion should help researchers and academicians obtain both an overview and a detailed view of ML research and its importance in various dimensions. All of these algorithms, tools, and services are indirectly dependent on artificial intelligence (AI) to frame the rules and fine-tune them as required.
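Since the abstract names TensorFlow among the APIs shaping current ML work, here is a minimal, self-contained Keras sketch on synthetic data. It is only an illustrative example of the API, not anything taken from the paper.

# A minimal TensorFlow/Keras sketch: tiny binary classifier on synthetic data.
import numpy as np
import tensorflow as tf

# Synthetic data: 2 features, label = 1 if their sum exceeds 1.0.
rng = np.random.default_rng(0)
x = rng.random((200, 2)).astype("float32")
y = (x.sum(axis=1) > 1.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, verbose=0)

print(model.predict(np.array([[0.9, 0.8]], dtype="float32")))  # close to 1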
Lecture Notes in Networks and Systems, 2020
Research and development in artificial intelligence (AI) involve a generic representation known as an ontology, which is a generic representation of domain data, for example a medical ontology or a manufacturing ontology. The present work deals with constructing ontologies from Web pages such as the Naukri, Monster, and TimesJobs websites; the resulting ontology is a job portals ontology. The motivation for this work is to propose simple methods and models to create the ontology, to handle the changes that happen to the source data, and to apply those changes dynamically to the existing ontology. The concept of dynamic ontology has a growing impact on research into ontology models, and we believe the change management aspect will also help researchers track changes in the source data, in this case the Web portal data. The other dimension of the work is the integration of recommender systems with the ontology model so as to surface the technologies that lack sufficient resources in the job market, as sketched below. The proposed work involves a simple way of performing ontology creation, change management, and dynamic ontology creation based on changes in the source data; the outcome is ontology creation, change management, dynamic ontology generation, and a brief description of recommender system usage.
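A minimal sketch, on hypothetical counts, of the recommender-integration idea: flag technologies whose demand in job postings exceeds the supply of candidates listing them. The skill lists below are assumptions for illustration.

# A minimal sketch: surface skills where posting demand exceeds candidate supply.
from collections import Counter

posting_skills = ["spark", "spark", "ontology", "java", "spark", "hive"]
candidate_skills = ["java", "java", "hive", "spark"]

demand = Counter(posting_skills)
supply = Counter(candidate_skills)

# Skills where postings outnumber candidates are reported as scarce.
scarce = {s: demand[s] - supply.get(s, 0)
          for s in demand if demand[s] > supply.get(s, 0)}
print(sorted(scarce.items(), key=lambda kv: -kv[1]))
# e.g. [('spark', 2), ('ontology', 1)]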
Procedia Computer Science, 2016
The critical challenge that healthcare organizations face is analyzing large-scale data. With the rapid growth of healthcare applications, the many devices used in healthcare generate varied data, which needs to be processed and effectively analyzed for better decision making. Cloud computing is a promising technology that can provide on-demand services for storing, processing, and analyzing the data. Traditional data processing systems no longer have the ability to process such huge volumes; to achieve better performance and solve the scalability issues, a better distributed system in a cloud environment is needed. Hadoop is a framework that can process large-scale data sets in a distributed environment, and it can be deployed on the cloud to process large-scale healthcare data. Healthcare applications are increasingly supplied through internet and cloud services rather than as traditional software, and healthcare providers need real-time information to provide quality care. This paper discusses the impacts of processing and analyzing large-scale healthcare data in a cloud computing environment.
International Journal of Psychosocial Rehabilitation
International Journal of Psychosocial Rehabilitation
Current usage of the internet and various apps populates huge amounts of data, and there is a mandate to store this data so that it can be used for various kinds of analytics. The analytics may involve clickstream analysis, sentiment analysis, and various recommendations to customers. The ultimate goal is to reach the customers by analysing
The advent of the mobile revolution and sensor data has resulted in the generation of huge amounts of data from various levels of users and various categories of applications. The main challenges in this context are the storage logic and the processing logic; another important requirement is generating strategic and valuable information to estimate user behaviour and preferences. In this work an attempt is made to show how the Hadoop ecosystem is used to store and process the data; to explain the processing and storage, the use of MapReduce and HDFS along with Pig and Hive is considered. The second objective of this work is analytics on the populated data with reference to R and machine learning. The ultimate goal of this paper is to study the computing scenarios that arise in processing bulk data, along with a mention of analytics.