Pages : 241-251
M.B. Hammawa* and G. Sampson
In this paper we considered several frameworks for data mining. These frameworks are based on different approaches, including the inductive databases approach, reductionist statistical approaches, the data compression approach, the constructive induction approach and others. We considered the advantages and limitations of these frameworks. We presented a view of data mining research as the continuous, never-ending development of an adaptive DM system towards the efficient utilization of available DM techniques for solving a current problem in a dynamically changing environment. We discussed one of the traditional information systems frameworks and, drawing an analogy to this framework, considered a data mining system as a special kind of adaptive information system. We adapted the information systems development framework to the context of data-mining systems development.
Pages : 253-259
Bijan Rouhi¹*, Peiman Ghasemi² and Amin Ghorbani³
In recent years, much research has been devoted to the deployment of cache coherence; however, few have developed the construction of e-business [26]. After years of confusing research into DHTs, we validate the synthesis of multicast solutions, which embodies the appropriate principles of compact complexity theory. We propose new concurrent epistemologies, which we call Retina. Although it at first glance seems unexpected, it is buffeted by related work in the field.
Pages : 261-272
Muqeem Ahmed*, S.Z. Hussain and S.A.M. Rizvi
In spite of current research and investigation, the development of advanced information technology is not the key issue. Different information technologies are available nowadays, but the major issue is how to gain more advantage from, and utilization of, these technologies for academic purposes in a distributed environment where faculty and students communicate with software rather than with individuals. The Knowledge-Based Grid was introduced for publishing, managing, sharing and utilizing diverse knowledge-base resources on the semantic web in a distributed environment. Knowledge discovery from the heterogeneous information sources available in a Knowledge Grid environment is a major research and development challenge. This paper concerns all aspects of the knowledge discovery and sharing process, and integrates grid data resources through an ontology server for educational institutes and universities in a distributed environment, to address these issues and challenges.
Pages : 273-280
M.B. Hammawa*
Currently, huge electronic data repositories are maintained by banks and other financial institutions. Valuable bits of information are embedded in these repositories, but their sheer size makes it impossible for a human analyst to come up with interesting information (or patterns) that will help the decision-making process. A number of commercial enterprises have been quick to recognize the value of this concept; as a consequence, the software market for data mining is expected to exceed 10 billion USD. This note is intended for bankers who would like to become aware of possible applications of data mining to enhance the performance of some of their core business processes. The author discusses broad areas of application, such as risk management, portfolio management, trading, customer profiling and customer care, where data mining techniques can be used in banks and other financial institutions to enhance business performance.
Pages : 281-292
Sarita Singh Bhadauria¹*, Abhay Kothari² and Lalji Prasad³
Object-orientation, involving class and object concepts and their properties, plays an important role in constructing any object-oriented system. In this research work, a comprehensive class diagram is provided that may help in designing a comprehensive software testing tool. A requirement specification for such a tool is established by studying the feature sets offered by existing software testing tools, along with their limitations. The requirement set thus developed is capable of overcoming the limitations of the limited feature sets of existing tools, and also contributes to the design of a comprehensive architecture class diagram for a software testing tool that includes most of the required features (most of the testing techniques come from procedural and object-oriented programming system development). In addition, because different user interfaces are provided by different tools, an effort has been made to use them in the system being designed.
Pages : 293-304
T. Lalitha¹* and R. Umarani²
Wireless Sensor Networks (WSNs) are vulnerable to node capture attacks, in which an attacker can capture one or more sensor nodes and reveal all stored security information, enabling him to compromise part of the WSN communications. Due to the large number of sensor nodes and the lack of information about the deployment and hardware capabilities of each node, key management in wireless sensor networks has become a complex task. Limited memory resources and energy constraints are further issues. Hence, an efficient key management scheme is needed that reduces the impact of node capture attacks and consumes less energy. In this paper, we develop a cluster-based technique for key management in wireless sensor networks. Initially, clusters are formed in the network and the cluster heads are selected based on energy cost, coverage and processing capacity. The sink assigns a cluster key to every cluster and an EBS key set to every cluster head. The EBS key set contains the pairwise keys for intra-cluster and inter-cluster communication. During data transmission towards the sink, the data passes through two phases of encryption, ensuring security in the network. Simulation results show that the proposed technique efficiently increases the packet delivery ratio with reduced energy consumption.
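The cluster-head election step described in this abstract can be sketched as a weighted score over the three stated criteria; the weights and node attributes below are assumptions for illustration, not the paper's exact formula.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    energy: float      # residual energy (higher is better) -- assumed field
    coverage: float    # fraction of neighbours covered -- assumed field
    capacity: float    # processing capacity -- assumed field

def select_cluster_head(nodes, w_energy=0.5, w_cov=0.3, w_cap=0.2):
    """Pick the node with the best weighted score of energy cost,
    coverage and processing capacity (hypothetical weights)."""
    def score(n):
        return w_energy * n.energy + w_cov * n.coverage + w_cap * n.capacity
    return max(nodes, key=score)

nodes = [Node(1, 0.9, 0.4, 0.5), Node(2, 0.6, 0.9, 0.9), Node(3, 0.95, 0.8, 0.7)]
head = select_cluster_head(nodes)
```

In a full scheme the sink would then hand the elected head its EBS key set; here only the election heuristic is sketched.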
Pages : 317-327
Roli Pradhan
The prediction of corporate bankruptcies is an important and widely studied topic, since it can have a significant impact on bank lending decisions and profitability. This work presents two contributions. First, we review the topic of bankruptcy prediction, with emphasis on different models. Second, inspired by traditional credit risk models, we propose novel indicators for the NN system. Thereafter, using a tailored back-propagation neural network (BPNN), the paper endeavors to predict the financial ratios expressing the position of a firm, in order to monitor bankruptcy and assess credit risk. It first estimates the financial ratios of a firm from 2001-2008 to train the BPNN, and uses the 2009 and 2010 values for validation. Finally, it draws predictions for the period 2011-2015 and emphasizes the growing role of BPNN-based prediction models for the banking sector, with a case study of AXIS Bank. We conclude with practical suggestions on how best to integrate models and research into policy-making decisions.
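The train-on-2001-2008, validate-on-2009-2010 setup described above can be sketched with a minimal back-propagation network; the network shape, learning rate and the synthetic ratio series are illustrative assumptions, not the paper's data or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2001, 2009, dtype=float)
x = ((years - 2001) / 14.0).reshape(-1, 1)                # scaled year input
y = (0.2 + 0.05 * (years - 2001) / 14.0).reshape(-1, 1)   # synthetic ratio

# One hidden layer with sigmoid units, linear output.
W1, b1 = rng.normal(0, 0.5, (1, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                         # plain gradient descent
    h = sigmoid(x @ W1 + b1)
    out = h @ W2 + b2
    err = out - y                             # squared-error gradient
    dW2, db2 = h.T @ err, err.sum(0)
    dh = err @ W2.T * h * (1 - h)             # back-propagate through sigmoid
    dW1, db1 = x.T @ dh, dh.sum(0)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= 0.1 * g / len(x)

# Extrapolate one year beyond the training window (the validation step).
pred_2009 = sigmoid(np.array([[8 / 14.0]]) @ W1 + b1) @ W2 + b2
```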
Pages : 329-339
Abhishek Roy and Sunil Karforma
With the advancement of Information and Communication Technology (ICT), information has become the most easily accessible yet very valuable commodity. Since the successful implementation of various electronic mechanisms like E-Governance, E-Commerce, E-Learning, E-Health, M-Governance, M-Insurance, etc. depends entirely on the security and authenticity of information, that information is highly susceptible to interception and alteration by hackers. In this paper the authors make a thorough study of the various risk factors of information security and their probable remedies using various cryptographic algorithms, so that the above-mentioned E-mechanisms can be implemented with utmost Privacy, Integrity, Non-Repudiation and Authentication (PINA).
Pages : 341-349
S.S. Asadi¹*, B.V.T. Vasantha Rao², M.V. Raju³ and P. Neela Rani⁴
The demand for natural resources is increasing day by day due to increasing population, rapid urbanization, industrial growth and agricultural utilization. Groundwater levels are decreasing over the years due to all of the above activities, decreasing annual rainfall caused by climatic change, and increasing runoff due to urbanization and deforestation. Hence, it is necessary to augment land and water resources for future demands. Keeping this in view, we have carried out a model study of socio-economic conditions and mapping of land use/land cover and geomorphology characteristics. The study area is situated in the East Siang district of Arunachal Pradesh, falling in SOI toposheet nos. 83I/13,14, 82L/5,10,11,14,16, 82P/2,3,4,7,8,11,12 and 83M/1,5,9. The present study delineates land use/land cover and geomorphology characteristics using IRS-1D PAN and LISS-III geocoded data on a 1:50000 scale. A Geographical Information System was used to prepare the database for the above layers, analyze their relationships and prepare integrated maps. The study area has a complex geomorphology, classified on the basis of geomorphic characteristics. The study has highlighted the utility of remote sensing data in creating socio-economic condition data and identifying land use/land cover and geomorphology classes even in a complex terrain like the study area. The result, in the form of an integrated map, could be properly analyzed using the advantages of GIS technology, with a methodology that includes the analysis of many resources and their interpretation. The final maps identify the different land use/land cover and geomorphology classes in the study area, to meet future demand and ensure proper utilization of resources.
Pages : 351-360
Subul Aijaz¹, Mayuri Pandey¹ and Sana Iqbal²
Many words have been written about the dangers of advanced nanotechnology. Most of the threatening scenarios involve tiny manufacturing systems that run amok, or are used to create destructive products. A manufacturing infrastructure built around a centrally controlled, relatively large, self-contained manufacturing system would avoid these problems. A controlled nanofactory would pose no inherent danger, and it could be deployed and used widely. Cheap, clean, convenient, on-site manufacturing would be possible without the risks associated with uncontrolled nanotech fabrication or excessive regulation. Control of the products could be administered by a central authority; intellectual property rights could be respected. In addition, restricted design software could allow unrestricted innovation while limiting the capabilities of the final products. The proposed solution appears to preserve the benefits of advanced nanotechnology while minimizing the most serious risks.
Pages : 361-369
Suzen S. Kallungal
Automatic fault detection is mainly intended for applications in the automotive industry. A fault detection system based on multivariate data analysis is needed to increase data reliability and to monitor and control test equipment. The detection scheme has to process different measurements at a time and check them for consistency. An important requirement is that the scheme should be able to automatically adapt itself to new data with a level of accuracy that may not always be achieved manually. The project related to this paper worked on real-time parameters read from high-power automotives, especially JCBs used in the construction industry. Various parameters, including temperatures, pressures, oil levels and valve states, are monitored and sent to a server. Results showed that automatic fault detection through a neural network system is useful, as it saves time and cost and detects faults accurately.
Pages : 371-377
Shatadal Patro¹ and Asha Ambhaikar²*
Mobile database applications on wireless devices, e.g., PDAs, laptops and cell phones, are growing rapidly. In such an environment, clients, servers and objects may change their locations. A very applicable class of query is the continuous k-NN query, which continuously returns the k nearest objects to the current location of the requester. Given the limitations of mobile environments, it is strongly recommended to minimize the number of connections and the volume of data transmitted from the servers. Caching is very profitable in such situations. In this paper, an enhanced cache grid partitioning technique for continuous k-NN queries in mobile DBSs is proposed. By square grid partitioning, the complete search space is divided into grid areas so that a piecemeal ordering can be imposed on the query targets. Simulation results show that the proposed cache grid partitioning scheme provides a considerable improvement in response time, number of connections and volume of data transferred from the DB server.
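The square-grid idea above can be sketched as follows: cached objects are bucketed by grid cell, and a k-NN query is answered from the cells around the client instead of contacting the server. The cell size, class API and ring-expansion rule are illustrative assumptions, not the paper's exact scheme.

```python
import math
from collections import defaultdict

CELL = 10.0  # assumed side length of a square grid cell

def cell_of(x, y):
    return (int(x // CELL), int(y // CELL))

class GridCache:
    def __init__(self, objects):
        self.cells = defaultdict(list)          # cell -> cached objects
        for oid, (x, y) in objects.items():
            self.cells[cell_of(x, y)].append((oid, x, y))

    def knn(self, qx, qy, k, ring=1):
        """Answer a k-NN query from the cached cells around the query
        point, scanning a (2*ring+1)^2 block of cells."""
        cx, cy = cell_of(qx, qy)
        cand = []
        for dx in range(-ring, ring + 1):
            for dy in range(-ring, ring + 1):
                cand.extend(self.cells.get((cx + dx, cy + dy), []))
        cand.sort(key=lambda o: math.hypot(o[1] - qx, o[2] - qy))
        return [oid for oid, _, _ in cand[:k]]

cache = GridCache({1: (2, 3), 2: (14, 3), 3: (8, 9), 4: (55, 60)})
```

As the requester moves, only queries that leave the cached ring of cells need a fresh server round trip, which is where the reduction in connections comes from.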
Pages : 379-385
Sandeep Sharma, Ruchi Dave and Naveen Hemrajani
This paper presents a technique to improve the quality of document clustering based on the word-set concept. The proposed technique, WDC (word-set based document clustering), utilizes the semantic relationships between words to create concepts, and obtains clustering of comparable quality significantly more efficiently than state-of-the-art text clustering algorithms, while giving more accurate clustering results than other methods.
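The word-set idea can be illustrated in miniature: words are mapped to shared concepts via a small semantic table (a stand-in assumption for the paper's semantic relationships), and documents with the same concept set fall into the same cluster.

```python
CONCEPTS = {  # word -> concept (hypothetical semantic table)
    "car": "vehicle", "automobile": "vehicle", "truck": "vehicle",
    "dog": "animal", "cat": "animal", "puppy": "animal",
}

def concept_set(doc):
    """Map a document to the set of concepts its known words denote."""
    return frozenset(CONCEPTS.get(w) for w in doc.lower().split()
                     if w in CONCEPTS)

def cluster_by_concepts(docs):
    clusters = {}
    for doc in docs:
        clusters.setdefault(concept_set(doc), []).append(doc)
    return list(clusters.values())

docs = ["my car is fast", "an automobile and a truck", "the dog chased the cat"]
groups = cluster_by_concepts(docs)
```

Because "car" and "automobile" share a concept, the first two documents cluster together even though they share no surface words.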
Pages : 393-398
Poonam Dhamal
Vehicular Ad Hoc Networks (VANETs) are a subclass of mobile ad hoc networks which provide a distinguished approach for Intelligent Transport Systems (ITS). A survey of routing protocols in VANETs is important and necessary for smart ITS. This paper discusses the advantages, disadvantages and applications of various routing protocols for vehicular ad hoc networks. It explores the motivation behind their design and traces the evolution of these routing protocols. The paper discusses the five main types of VANET protocols: topology-based, position-based, geocast, broadcast and cluster-based. It also discusses types of broadcast protocols such as multi-hop and reliable broadcast protocols.
Pages : 387-392
Shiv Kumar Gupta¹, Ramesh C. Poonia¹ and Ritu Vijay²
The coming generation is going to be the mobile communication technology generation. At present we are fully equipped with 2G and have started using 3G, but the time is not far off when we will deal with 4G and beyond. In this paper we present a brief overview and the limitations of existing systems, and show how the next generation of mobile communication technology is going to start a new evolution. We call this G-Next revolution 4G. We discuss an overview and the features of 4G, its future vision and its scope. This paper will help new scholars to enhance their knowledge of existing and G-Next mobile communication systems.
Pages : 399-404
Abhishek Kumar Singh and Abhay Kothari
With the emergence and expansion of Internet subscribers all over the world, ISP services are becoming more popular. The rapid increase in connection demand and heavy network traffic is the main reason behind the need for a scalable, reliable network. To offer better solutions, a new theoretical and practical approach should be considered that can deliver such a reliable network.
Pages : 405-410
M. Lawanya Shri
Cloud computing is a virtualized, Internet-based technology that has become an increasingly important trend by offering shared resources, including infrastructure, software, applications and business processes, to the market to match elastic demand and supply. In today's competitive environment, the service dynamism, elasticity, choice and flexibility offered by this scalable technology are so attractive that cloud computing is steadily becoming an integral part of the enterprise computing environment. This paper presents a survey of the current state of cloud computing, including a discussion of its evolution, the characteristics of the cloud and the technologies currently adopted. It also presents a comparative study of cloud computing platforms (Amazon and Google) and their challenges.
Pages : 411-415
Maya Ram Atal*, Roohi Ali, Ram Kumar and Rajendra Kumar Malviya
Search engines play a major and essential role in discovering information nowadays. Due to limitations of network bandwidth and hardware, search engines cannot obtain the entire information of the web and have to download the most essential pages first. In this paper, we propose a crawl ordering strategy based on SiteRank and compare it with several other crawl ordering strategies. All four strategies optimize naive crawling to some degree. At the beginning of the crawl, all the strategies fetch pages with high PageRank. When 48% of the pages have been downloaded, the cumulative PageRank is over 58% even for the worst strategy. In the later phase of the crawl, the cumulative PageRank varies slowly and finally converges. The objective of these strategies is to download the most essential pages early during the crawl. Experimental results indicate that the SiteRank-based strategy works efficiently in discovering essential pages under the PageRank evaluation of page quality.
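A SiteRank-ordered frontier can be sketched with a priority queue: URLs from higher-ranked sites are fetched first. The `SITE_RANK` values here are illustrative; the paper derives them from the site-level link graph.

```python
import heapq

SITE_RANK = {"a.example": 0.6, "b.example": 0.3, "c.example": 0.1}

def site_of(url):
    """Extract the host part of an http URL (scheme://host/path)."""
    return url.split("/")[2]

def crawl_order(frontier):
    """Return frontier URLs in descending SiteRank order; heapq is a
    min-heap, so ranks are pushed negated."""
    heap = [(-SITE_RANK.get(site_of(u), 0.0), u) for u in frontier]
    heapq.heapify(heap)
    order = []
    while heap:
        _, url = heapq.heappop(heap)
        order.append(url)
    return order

frontier = ["http://b.example/p1", "http://a.example/p2", "http://c.example/p3"]
```

Ordering by the site's rank rather than per-page PageRank is the key saving: the site score is known before any page of the site is downloaded.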
Pages : 417-421
Syed Minhaj Ali¹, Roohi Ali² and Sana Iqbal³
Many security experts would agree that, had it not been for the analysis of local-area networks, the investigation of the Turing machine might never have occurred. Given the current status of permutable modalities, analysts daringly desire the natural unification of DHCP and E-commerce. Our focus in our research is not on whether forward-error correction and DHCP are entirely incompatible, but rather on presenting a novel heuristic for the analysis of multi-processors (Opah).
Pages : 423-427
D. Suganthi
An accurate and effective algorithm for segmenting images is very useful in many fields, especially medical imaging. In this paper we introduce a novel method that focuses on segmenting the brain MR image, which is important for the study of neural diseases. Because of the many noise sources in the acquisition procedure, such as eddy currents, susceptibility artifacts, rigid body motion and intensity inhomogeneity, segmenting the brain MR image is a difficult task. In this algorithm, we overcome the intensity inhomogeneity by modifying the objective function to compensate for the immediate-neighborhood effect using a Gaussian smoothing method, decreasing the influence of the inhomogeneity and increasing segmentation accuracy. Experiments with simulated images and clinical MRI data show that the proposed algorithm is effective.
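The neighborhood-smoothing idea can be shown on a toy example: a local average (standing in for Gaussian smoothing) suppresses acquisition noise before intensity-based segmentation of a synthetic two-class image. This is a drastic simplification of the paper's modified objective function, for intuition only.

```python
import numpy as np

def local_mean(img, r=2):
    """Edge-padded box average, a cheap stand-in for Gaussian smoothing."""
    pad = np.pad(img, r, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += pad[r + dy: r + dy + img.shape[0],
                       r + dx: r + dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

rng = np.random.default_rng(1)
truth = np.zeros((16, 16)); truth[:, 8:] = 1.0           # two tissue classes
bias = np.linspace(-0.1, 0.1, 16)[None, :]               # mild inhomogeneity
img = truth + bias + rng.normal(0.0, 0.3, truth.shape)   # noisy MR-like image

# Threshold the smoothed neighborhood instead of the raw pixel values.
seg = (local_mean(img) > img.mean()).astype(float)
accuracy = (seg == truth).mean()
```

Thresholding the raw noisy pixels flips many labels; averaging each pixel's neighborhood first shrinks the noise variance, so the simple intensity threshold recovers the two classes almost exactly.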
Pages : 429-433
Md. Sadique Shaikh
Human System Integration (HSI) creates an effective communication medium between a human (business organization) and a computer (IT and advanced electronic communication). An HSI is basically any MIS, AI, BIS or DSS with a data pool such as super servers or data marts/warehouses. Why is it needed? Because software that is difficult to understand and use forces mistakes into the decision-making process when running a business with the support of IT. The Human-Computer Interface (HCI), often called the Graphical User Interface (GUI), is a related but slightly different concept: an HSI is an HCI (or GUI) that is fully understandable to end users, helping them make decisions and work mutually and simultaneously with the computer, with complete human-computer coordination. This coordination is only possible where the HSI is fully accurate, and is rarely found in poor HCIs. Thus, to modify or generate an effective and useful HSI, interface design must focus on three major concerns for effective integration between the human (i.e., the organization) and the system (i.e., the computer): 1) the design of interfaces between software components; 2) the design of interfaces between software and other non-human producers and consumers of information (i.e., other external entities); and 3) the design of the interface between a human (i.e., users/business organization) and the computer. HCI/GUI design gives equal emphasis to all three aspects, but when developing an effective HSI for decision-making in business organizations, the major concern is the third. This paper covers this emerging need for effective decision-making with some models and discussion.
Pages : 435-438
Shiv Kumar Gupta¹ and Ritu Vijay²
Today, large organizations are served by different types of data processing and information systems. It is important to create an integrated repository of what these systems contain and do, in order to use them collectively and effectively. The repository contains metadata of the source systems, the data warehouse and also business data. Metadata, usually called data about data, is an important part of this. Metadata is supposed to be a helping hand to all co-workers in an organization who work directly or indirectly with the data warehouse. The main purpose of this paper is to examine this through feedback from business end users.
Pages : 439-442
Md. Sadique Shaikh
Designers have often ignored the human and organizational element and concentrated on the technical implementation of the hardware/software mix (Hutchinson, 2000). Every new wave of technology brings its own expectations and surrounding hype, and the field of decision support is no exception: on one hand, the need for reliable decision-making clues is permanent; on the other hand, the substantial supply of decision support tools does not seem to be in tune with that need.
Pages : 443-446
Srinivas Rao Kanusu¹ and Ratnakumari Challa²
An approach to extracting texture patterns as features for palmprint verification is proposed in this paper. The features used to classify the texture pattern of palmprints are calculated as an array of mean values of the pixels from the gridded ROI. The palmprint image is processed through the various stages of the system to generate the feature vector, which is used to classify the texture for palmprint verification.
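The grid-mean feature described above can be sketched directly: divide the ROI into cells, take each cell's mean intensity as one feature, and compare feature vectors. The grid size, synthetic images and decision threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def grid_mean_features(roi, rows=4, cols=4):
    """Mean pixel value of each cell in a rows x cols grid over the ROI."""
    h, w = roi.shape
    return np.array([roi[i * h // rows:(i + 1) * h // rows,
                         j * w // cols:(j + 1) * w // cols].mean()
                     for i in range(rows) for j in range(cols)])

def verify(f1, f2, thresh=1.5):
    """Accept if the mean absolute feature difference is small
    (hypothetical threshold)."""
    return float(np.mean(np.abs(f1 - f2))) < thresh

rng = np.random.default_rng(0)
palm = rng.integers(0, 256, (64, 64)).astype(float)   # enrolled palmprint
same = palm + rng.normal(0, 2, palm.shape)            # same palm, slight noise
other = rng.integers(0, 256, (64, 64)).astype(float)  # a different palm

f_enrol, f_same, f_other = map(grid_mean_features, (palm, same, other))
```

Averaging within cells makes the vector short and tolerant of pixel-level noise, while the grid layout still preserves the coarse spatial texture of the palm.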
Pages : 447-450
Ankur Lal, Sipi Dubey and Bharat Pesswani
Standard implementations of TCP give poor performance when packets are reordered. In this paper, loss of packets in TCP is detected using two diverse methods: CPR (Constant Packet Re-arranging) and WCPR (Without Constant Packet Re-arranging). Constant packet re-arranging does not rely on duplicate acknowledgements to detect packet loss. Instead, a timer is used to track how long ago each packet was transmitted.
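The timer-based detection can be sketched as follows: each unacknowledged packet carries its transmit time, and a packet is declared lost only when its timer exceeds the retransmission timeout, so reordered ACKs never trigger a spurious retransmission. Class and field names are illustrative, not the paper's implementation.

```python
class CprSender:
    def __init__(self, rto=0.2):
        self.rto = rto
        self.unacked = {}                 # seq -> transmit time

    def send(self, seq, now):
        self.unacked[seq] = now

    def ack(self, seq):
        self.unacked.pop(seq, None)       # out-of-order ACKs are harmless

    def detect_losses(self, now):
        """Return packets whose retransmission timer has expired;
        no duplicate-ACK counting is involved."""
        return [seq for seq, t in self.unacked.items()
                if now - t > self.rto]

s = CprSender(rto=0.2)
s.send(1, now=0.0); s.send(2, now=0.01); s.send(3, now=0.02)
s.ack(2)                                   # arrives out of order: no effect
lost = s.detect_losses(now=0.25)
```

A dupACK-based sender would have counted the reordered ACK toward a fast retransmit; here packets 1 and 3 are retransmitted only because their timers genuinely expired.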
Pages : 451-454
Keshav Niranjan
The paper presents a semantics-based search paradigm to be embedded in Natural Language Interface (NLI) systems. Classical Information Retrieval (IR) models were based on lexical mapping and approximation-based searches, which suffered from obvious weaknesses as follows. 1. Queries used predefined lexical mappings or approximations and would miss any direct or indirect references via semantic alternatives. Homonymous lexemes can carry many meanings, leading to ambiguous queries and failed processes, or to ambiguous results if the user issues a hypernym query. No intelligent mechanism is present in the NLI to interpret the query. 2. When we write a query, each lexeme gives only the individual meaning of the word, but lexemes are related to each other and produce a collocated meaning for the entire sentence. The classical IR model does not consider this aspect of IR. To overcome these inadequacies in the classical IR model, the NLI has to be made smarter with adequate semantic capabilities. We therefore provide inferential capability to the existing NLI through a knowledge base. This knowledge base consists of facts, concepts, synonymy, homonymy, hypernymy, discourse and contextual information, and helps in generating appropriate and accurate results.
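The knowledge-base lookup can be sketched as query expansion: before matching, the NLI replaces each query term with the set of its known semantic alternatives. The entries below are toy assumptions standing in for the paper's knowledge base of synonymy and hypernymy.

```python
KNOWLEDGE_BASE = {
    "physician": {"doctor", "medic"},      # synonymy (illustrative entries)
    "car": {"automobile", "vehicle"},      # synonymy / hypernymy
}

def expand_query(terms):
    """Add every known semantic alternative of each query term."""
    expanded = set()
    for t in terms:
        expanded.add(t)
        expanded |= KNOWLEDGE_BASE.get(t, set())
    return expanded

def search(docs, terms):
    q = expand_query(terms)
    return [d for d in docs if q & set(d.lower().split())]

docs = ["the doctor is in", "a fast automobile", "rainy weather today"]
hits = search(docs, ["physician", "car"])
```

A purely lexical matcher would return nothing for this query; the expanded query retrieves the documents that use semantic alternatives of the original words.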
Pages : 459-462
C.S. Naga Manjula Rani
Information retrieval (IR) stands today at a crossroads. With the enormous increase in recent years in the number of text databases available online, and the consequent need for better techniques to access this information, there has been a strong resurgence of interest in IR research. Originally an outgrowth of librarianship, IR has expanded into fields such as office automation, genome databases, fingerprint identification, medical image management, knowledge discovery in databases and multimedia management. This paper deals with the importance of IR and the classical models of IR.
Pages : 455-458
Ashish Misra, Anand Kumar Dixit and Manish Jain
In this paper we introduce the Enterprise Ontology for e-business models, a set of carefully defined concepts, widely used for describing enterprises in general, which can help companies understand, communicate, measure, imitate and learn more about the different aspects of e-business in their firm. The Enterprise Ontology model highlights the e-business issues and elements that firms have to consider in order to operate successfully in the age of the Internet. The Enterprise Ontology contains the four main pillars of a business model: Product Innovation, Infrastructure Management, Customer Relationship and Financials.
Pages : 463-466
D.G. Krishnamohan
The primary purpose of a computer network is to share resources. A computer network is referred to as client/server if at least one of the computers is used to "serve" other computers, referred to as "clients". Besides the computers, other types of devices can be part of the network. In the early days of networking, there was one central server that contained the data, and all the clients accessed this data through a network interface card. Later on, client/server architecture came into existence, where the burden still lies on the server machine. To avoid these disadvantages, distributed computing was introduced, which reduces the burden on the server by providing work-sharing capabilities. This paper describes how the concept of distributed computing came into existence, based on the advantages and disadvantages of earlier networking concepts. The concept of distributed computing holds that once data is available within the server(s), it should be accessible and processable from any kind of client device, such as a computer, mobile phone or PDA.
Pages : 305-315
K. Srinivasa Rao¹* and M.V.S.N. Maheswar²
In a grid computing environment, resources are autonomous, wide-area distributed, and usually not free. These unique characteristics make scheduling in a self-sustainable and market-like grid highly challenging. The goal of our work is to build a global computational grid in which every participant has enough incentive to stay and play. There are two parties in the grid: resource consumers and resource providers. Thus the performance objective of scheduling is two-fold: for consumers, a high successful execution rate of jobs; for providers, fair allocation of benefits. We propose an incentive-based grid scheduling scheme composed of a P2P decentralized scheduling framework, a set of local heuristic algorithms, and three market instruments: job announcement, price and competition degree. The results show that our approach outperforms other scheduling schemes in optimizing incentives for both consumers and providers, leading to highly successful job execution and fair profit allocation.
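The three market instruments can be sketched as a simple matching rule: a job announcement carries a price, willing providers respond, and the match prefers the cheapest provider with the lowest competition degree. The fields and scoring rule below are assumptions for illustration, not the paper's exact algorithm.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    price: float        # asking price per job
    competition: float  # competition degree: load from rival jobs (0..1)

def schedule(job_price, providers):
    """Match an announced job to the cheapest willing provider,
    breaking price ties toward the lower competition degree."""
    willing = [p for p in providers if p.price <= job_price]
    if not willing:
        return None     # announcement fails; the consumer may raise the price
    return min(willing, key=lambda p: (p.price, p.competition))

providers = [Provider("slow", 2.0, 0.1), Provider("busy", 1.0, 0.9),
             Provider("idle", 1.0, 0.2)]
winner = schedule(job_price=1.5, providers=providers)
```

Price gives providers their incentive, while the competition-degree tiebreak steers jobs away from overloaded providers, serving the consumer's success-rate objective.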