Abstract: This paper presents a status report on our efforts and experiences with dynamic load balancing for parallel depth-first search (DFS) tree computation on heterogeneous workstation clusters, a project of the High Performance Computing Laboratory at Shanghai Jiaotong University. We describe an implementation of the parallel DFS search algorithm, running under the MPI message-passing interface and the Solaris operating system on heterogeneous workstation clusters. The main goals of this paper are to demonstrate the speedup gained by using a heterogeneous workstation cluster platform to solve large search problems, to distribute the tree search space among the processors, and to show, through a parallel simulation application, the maturity of these recent technologies. We have successfully parallelized the DFS algorithm and distributed its search space among the processors, and found that the critical issue in parallel DFS is the distribution of the search space among the processors. First experimental results of parallel DFS are given for tests that will serve as a starting point for further development of the project. We present our preliminary progress here, and we expect in the near future to demonstrate true dynamic load balancing for the DFS algorithm running on a heterogeneous workstation cluster platform, resulting in a good load balance among all the processors.
Keywords: heterogeneous workstation clusters, parallel tree computation, DFS, dynamic load balancing strategy, parallel performance.
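One common way to distribute a DFS search space among processors is stack splitting, where a busy worker donates part of its local DFS stack to an idle one. The sketch below donates alternate frontier entries; this is only an illustrative strategy, since the abstract does not specify the paper's actual distribution scheme, and the function name is hypothetical:

```python
def split_stack(stack):
    """Stack-splitting work donation for parallel DFS: when another
    worker goes idle, give it every other entry from the local stack
    so both workers keep a mix of shallow and deep subtrees
    (an illustrative strategy, not the paper's actual scheme)."""
    donated = stack[::2]   # entries handed to the idle worker
    kept = stack[1::2]     # entries the donor keeps
    return kept, donated
```

In an MPI setting, `donated` would be serialized and sent to the requesting rank, while the donor continues expanding `kept`.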
Abstract: In this study, a new approach for the recognition of isolated handwritten Arabic characters is presented. The proposed method places a 5x5 grid on the character to extract the features needed for the recognition step. These features are calculated from the grid cells and are then fed to a decision tree to classify the character into one of the 28 classes. The classification process depends on the value assigned to each feature, which leads to one leaf node in the decision tree that represents the Arabic character to be classified. Experimental results showed the robustness of the proposed approach for the recognition of isolated handwritten Arabic characters, with a recognition rate of about 80.2%. The test was performed on 1120 different characters written by eight users, 40 examples for each of the 28 Arabic characters.
Keywords: Arabic Character Recognition, Feature Extraction, Machine Learning, Pattern Recognition.
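A simple form of grid-based feature extraction of the kind described above is zoning: overlay a 5x5 grid on the binary character image and compute one statistic per cell. The sketch below counts foreground pixels per cell; the per-cell calculation is an assumption, since the abstract does not state which grid calculation the paper uses:

```python
def grid_features(image, rows=5, cols=5):
    """Zoning features: overlay a rows x cols grid on a binary
    character image (list of rows of 0/1 pixels) and count the
    foreground pixels in each cell.  The per-cell statistic is an
    illustrative choice, not necessarily the paper's."""
    h, w = len(image), len(image[0])
    feats = []
    for r in range(rows):
        for c in range(cols):
            r0, r1 = r * h // rows, (r + 1) * h // rows
            c0, c1 = c * w // cols, (c + 1) * w // cols
            feats.append(sum(image[i][j]
                             for i in range(r0, r1)
                             for j in range(c0, c1)))
    return feats
```

The resulting 25-dimensional vector is what a decision tree would branch on, one feature test per internal node.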
Abstract: Current IT trends have caused huge changes to IT infrastructure compared to the past. Security, along with all its associated risks, has brought great losses to organizations. This paper addresses not only the management issues related to security risk assessment but also the processes that should be initiated alongside its technical aspects.
We can model the security-relevant processes throughout any information system. This can help us develop security policy, allowing management to structure its policy outside the technological arena. It can also be used as an evaluation tool during the analysis and design of applications.
Besides offering a realistic approach for senior management, the paper also shows how to cope with security issues while considering resources and their availability, and highlights how evolving environments should be assessed with self-assessment criteria.
Keywords: Security Risks, Security Process Management, Security Assessment, Security Plans, Security Model, Security Audit.
Abstract: This paper investigates and demonstrates the application of computer simulation for determining the optimum design of a lamination-stacking workstation in a water pump assembly line, in a virtual reality environment. Ergonomic analysis, discrete-process simulation, and a multi-response optimization approach were used concurrently to determine the optimum achievable design for the stacking workstation. In this context, the "optimum" design entailed attainment of production quotas, avoidance of ergonomic deficiencies, and economy of implementation and operational costs. The paper gives attention to the analysis of facilities, the tooling system, and ergonomic workplace design; deficiencies in these areas can have a devastating impact on safety, quality, and cost.
The simulation model was constructed using five software applications. The AutoCAD package was used for modeling the geometry of components, and four simulation tools were used to perform ergonomic assessments of a number of alternative designs. In addition, the Design-Expert design-of-experiments (DOE) software was used with a numerical optimization function to find the maximum desirability of all objectives simultaneously.
Keywords: discrete-process simulation, workstation, ergonomics, optimization.
Abstract: Nowadays, the grid environment presents new challenges, such as the dynamic availability of geographically distributed resources, quick and efficient access to data, reduction of latency, and fault tolerance. Grids concentrate on reducing the execution time of applications that require a great number of processing cycles. In such an environment, these advantages are not achievable without replication, which is considered an important technique for reducing the cost of access to data in a grid. In this paper, we present our contribution: a cost model whose objective is to reduce the cost of access to replicated data. These costs depend on many factors, such as bandwidth, data size, network latency, and the number of read/write operations.
Key words: Data Grid, replication, data placement, cost model, CERN.
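A cost model over the factors the abstract lists (bandwidth, data size, latency, read/write counts) could be sketched as below. This is a minimal illustrative model, not the paper's actual one: it assumes reads are served from the nearest replica while each write must be propagated to every replica, and all parameter names are hypothetical:

```python
def access_cost(data_size_mb, bandwidth_mbps, latency_s,
                n_reads, n_writes, n_replicas):
    """Illustrative cost of accessing one replicated file.

    A single transfer costs (size / bandwidth) + latency.  Reads hit
    one replica; writes are propagated to all replicas.  This is a
    sketch of the kind of model described, not the paper's formula.
    """
    transfer = data_size_mb / bandwidth_mbps + latency_s
    read_cost = n_reads * transfer
    write_cost = n_writes * n_replicas * transfer
    return read_cost + write_cost
```

Under such a model, adding replicas lowers the read term (shorter paths, higher aggregate bandwidth) but inflates the write term, which is the trade-off a placement algorithm must balance.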
Abstract: What is the relation between the process of adopting new technologies, and its impact on business value, in situations of high internal and external uncertainty? Whereas technology adoption is generally fairly well understood, the models do not seem to hold in situations of high uncertainty. In addition, the adoption of a new technology results from a sequence of individual decisions to adopt the new technologies, decisions being the result of the match between the uncertain benefits and costs linked with the adoption process. An understanding of the factors affecting this choice is therefore an essential step forward in order to study the adoption process of new technologies as well. The aim of this paper is to investigate the impact of this uncertainty, using a case study on the introduction of a new technology in a large Egyptian public bank. After exploring the most relevant uncertainty factors and their impact on the adoption process, the paper ends with a general discussion and conclusion.
Keywords: Technology Adoption; Knowledge Discovery in Database (KDD); Customer Relationship Management (CRM); Banking Sector.
ABSTRACT: Nowadays, the International Organization for Standardization (ISO) is working on the next generation of software product quality standards, to be referred to as Software Product Quality Requirements and Evaluation (SQuaRE – the ISO 25000 series). This series of standards will replace the current version of the ISO 9126 International Standard, which consists of inventories of proposed metrics to measure the quality of internal, external, and in-use software products. For each of these metrics there is a cross-reference indicating where it could be applied (measured) during the ISO 12207 Software Life Cycle Processes and activities (SLCP). This paper provides a mapping between these two standards to highlight the weaknesses of these cross-references and proposes a number of suggestions to address them.
Keywords: Software Measurement, Software Quality Metrics, Software Life Cycle Processes (SLCP), ISO 9126, ISO 12207.
Abstract: This paper describes a formal framework for specification and analysis of object-oriented designs. The formal design notation models both the structural and behavioral views of the design. The analysis framework supports a Goal Expression Language (GEL) that allows the user to express his/her analysis goals for the specific design under consideration and the processor then analyzes the design according to these goals. Code generation is supported following successful analysis.
Keywords: object-oriented design, design specification, formal analysis, goal expression language.
This paper introduces a real-time model based on a true-concurrency semantics, expressing parallel behaviors while supporting timing constraints, explicit action durations, structural and temporal non-atomicity of actions, and urgency. This model is called Durational Action Timed Automata (DATA*). As an application, we propose translation rules from D-LOTOS language specifications to DATA* structures.
Keywords: Real-time systems, action durations, maximality-based semantics, DATA*.
Recommendation Systems (RS) have been widely used in many Internet activities, and their importance is increasing due to the "information overload" problem arising from the Internet. This paper describes the current usage domains of RS, giving background and examples of systems used in each domain. In addition, it presents the different approaches to RS, with background and examples of systems using each approach. Furthermore, the paper discusses the possibility of using RS in Learning Management Systems (LMS) to support students' needs and preferences, explores the LMS areas in which RS may be used, discusses the suitability of each RS approach for recommending learning objects, identifies the most suitable approach(es), and designs a proposed structure for RS in LMS. This paper aims to highlight the importance of RS in the scope of eLearning and the possibility of using it in LMS.
Keywords: Recommendation systems, learning management systems, course management system, eLearning platform.
Multi hop wireless infrastructures can be used as an extension to fixed infrastructures, where wireless nodes form a large network that provides access to the fixed network infrastructure, such as the internet, via multiple wireless hops. Wireless LANs and routing techniques, used in ad hoc networks, can be used in such a network. Here, we use the IEEE 802.11 standard, extended with power control to optimize the performance of such a network.
This work focuses on the Medium Access Control (MAC) mechanism, based on Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). By limiting the transmission power to the level just sufficient for correct reception at the receiver, the network can perform multiple transmissions simultaneously and reduce interference between transmitting nodes.
We found that network performance is highly dependent on the traffic distribution type, the path loss model used, and the hierarchy of the wireless network. We propose some modifications to the wireless LAN standard to enhance the performance of wireless multi-hop networks through transmission power control. Validation of the proposed mechanisms has been done with the OPNET Modeler simulator, using different models for path loss, traffic distributions, and topologies. The results show higher performance when using power control for data and control packets, and higher throughput when assuming path loss models that cause less interference. For the different distributions of nodes in the network, we observed high throughput in long chains and network grids where the distances between nodes are symmetric.
Keywords: IEEE 802.11, ad hoc, multi hop, path loss, interference, Wireless networks.
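The core of "power just sufficient for correct reception" can be sketched with a log-distance path-loss model: the minimum transmit power is the receiver sensitivity plus the predicted path loss (plus a safety margin). All parameter values below are illustrative assumptions, not figures from the paper or the 802.11 standard:

```python
import math

def min_tx_power_dbm(distance_m, rx_sensitivity_dbm=-85.0,
                     pl0_db=40.0, ref_dist_m=1.0, exponent=3.0,
                     margin_db=3.0):
    """Smallest transmit power (dBm) still reaching the receiver under
    a log-distance path-loss model:
        PL(d) = PL0 + 10 * n * log10(d / d0).
    Sensitivity, PL0, and the exponent are illustrative values."""
    path_loss = pl0_db + 10.0 * exponent * math.log10(distance_m / ref_dist_m)
    return rx_sensitivity_dbm + path_loss + margin_db
```

Because required power grows with distance, nearby pairs can transmit at low power without blocking the carrier sense of farther nodes, which is what enables the simultaneous transmissions the abstract describes.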
This paper presents a new root-based stemming algorithm for the Arabic language. As in other natural languages, not all words used in Arabic have roots; some are borrowed from other languages, e.g. the word "تلفزيون" (television). In such cases a stemmer will fail to find the right root, because these foreign words have no root. The algorithm is based on affix removal combined with knowledge from structural linguistics. The implementation and evaluation of this algorithm show a noticeable improvement in accuracy relative to previous algorithms.
Keywords: Arabic, Stemming, Root, negative suffix, negative prefix, Light Stemming, NLP.
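The affix-removal idea can be sketched as below. The prefix and suffix lists are short illustrative samples (a real Arabic stemmer uses much larger, linguistically validated lists), and the loanword handling simply reflects the abstract's observation that borrowed words have no root:

```python
PREFIXES = ["وال", "بال", "كال", "فال", "ال", "لل", "و"]   # illustrative sample
SUFFIXES = ["ات", "ون", "ين", "ها", "ية", "ه", "ة"]        # illustrative sample
LOANWORDS = {"تلفزيون"}  # borrowed words that have no Arabic root

def stem(word):
    """Naive affix-removal stemmer (a sketch, not the paper's
    algorithm): strip at most one prefix and one suffix, never
    leaving fewer than 3 letters, and pass loanwords through."""
    if word in LOANWORDS:
        return word
    for p in sorted(PREFIXES, key=len, reverse=True):  # longest match first
        if word.startswith(p) and len(word) - len(p) >= 3:
            word = word[len(p):]
            break
    for s in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(s) and len(word) - len(s) >= 3:
            word = word[:-len(s)]
            break
    return word
```

The paper's contribution lies in combining such removal with structural-linguistic knowledge to recover the actual tri-literal root, which this sketch does not attempt.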
Reinforcement Learning (RL) is a class of model-free learning control methods that can solve Markov Decision Process (MDP) problems. However, one difficulty in applying RL control is its slow convergence, especially in MDPs with a continuous state space. In this paper, a modified RL structure is proposed to accelerate reinforcement learning control. This approach combines a supervision technique with the standard Q-learning algorithm of reinforcement learning. A priori information is provided to the RL agent by direct integration of human operator commands (human advice) or by an optimal LQ-controller, indicating preferred actions in particular situations. It is shown that the convergence speed of the supervised RL agent is greatly improved compared to the conventional Q-learning algorithm.
Simulation work and results on the cart-pole balancing problem and learning navigation tasks in unknown grid world with obstacles are given to illustrate the efficiency of the proposed method.
Keywords: Supervised Reinforcement Learning, Autonomous Agents, LQ-controller, Machine Learning.
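The combination of Q-learning with a supervisor can be sketched as follows: in states where advice (from a human or an LQ-controller) is available, the advised action overrides exploration, while the Q-update itself stays standard. This is a minimal sketch of the general idea; the paper's exact integration scheme may differ, and all names are hypothetical:

```python
import random

def choose_action(q, state, actions, advice=None, epsilon=0.1):
    """Epsilon-greedy selection with an optional supervisor: if the
    supervisor has advice for this state, follow it (a sketch of
    supervised RL, not the paper's exact scheme)."""
    if advice and state in advice:
        return advice[state]               # human / LQ-controller hint
    if random.random() < epsilon:
        return random.choice(actions)      # explore
    return max(actions, key=lambda a: q.get((state, a), 0.0))  # exploit

def q_update(q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """Standard Q-learning update: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q.get((s_next, a2), 0.0) for a2 in actions)
    old = q.get((s, a), 0.0)
    q[(s, a)] = old + alpha * (r + gamma * best_next - old)
```

Because advised actions steer early episodes toward rewarding regions, the agent spends less time on random exploration, which is the intuition behind the reported speed-up.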
This paper provides two new methods to factorize the modulus of the Rabin cryptosystem. The first method is based on finding the value of p from a specified table used in the Rabin cryptosystem. The second method aims to factorize n (where n = p·q) by using a new algorithm based on the definition of the generator.
Keywords: Public key cryptosystem, Rabin cryptosystem, attack, cryptanalysis, Cryptography, Factorization method.
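The abstract's two methods are not detailed enough to reproduce here, but the classical reason factoring the modulus breaks Rabin can be sketched: any two essentially different square roots of the same value mod n = p·q reveal a factor through a gcd. This is the textbook observation, not the paper's new algorithms:

```python
from math import gcd

def factor_from_roots(n, x, y):
    """Recover the factors of a Rabin modulus n = p*q from two
    distinct square roots x, y of the same value mod n, with
    x != +/- y mod n.  Then gcd(x - y, n) is a nontrivial factor
    (the classical argument, not the paper's method)."""
    assert (x * x - y * y) % n == 0          # same square mod n
    assert x % n != y % n and (x + y) % n != 0  # essentially different roots
    p = gcd(x - y, n)
    return p, n // p
```

For example, 9 and 2 are both square roots of 4 mod 77, and gcd(9 − 2, 77) = 7 exposes the factorization 77 = 7 · 11.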
Global Positioning System (GPS) and Strapdown Inertial Navigation System (SDINS) can be integrated together to provide a reliable navigation system. This paper offers a new method for error estimation in a GPS/INS augmented system based on Artificial Neural Network (ANN) and Wavelet Transform (WT).
An ANN was adopted in this paper to model the GPS/INS position and velocity errors in real time to predict the error in the integrated system and provide accurate navigation information for a moving vehicle.
It was found that the proposed technique reduces the standard deviation error in the position by about 91% for X, Y, and Z axes, while in velocity it was reduced by about 94% for North, East, and Down directions.
Keywords: vehicular navigation, inertial navigation, GPS, wavelet multi-resolution analysis, neural networks.
Global positioning system (GPS) and inertial navigation system (INS) can be integrated together to provide a reliable navigation system. GPS provides position information and possibly velocity when there is direct line of sight to four or more satellites.
The integration of GPS and INS leads to an accurate navigation solution by overcoming their respective shortcomings. To make this integration possible, the difference in sampling rate between the GPS and INS systems must be resolved before the integration can work properly. Three methods are used in this paper (Newton interpolation, spline interpolation, and an artificial neural network (ANN)) to solve the data-rate mismatch between the GPS and INS systems.
Keywords: GPS; IMU; Sensor Integration; Integrated Navigation; Neural Network.
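The first of the three rate-matching methods, Newton interpolation, can be sketched directly: fit the low-rate GPS samples with divided differences and evaluate the polynomial at the high-rate INS timestamps. This is the standard textbook procedure, shown as a sketch of the idea rather than the paper's code:

```python
def newton_interp(xs, ys, x):
    """Newton divided-difference interpolation: build the coefficient
    table from low-rate samples (xs, ys), then evaluate at x using
    Horner's scheme (illustrative rate-matching sketch)."""
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):                   # divided-difference table in place
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    result = coef[-1]                       # nested (Horner-style) evaluation
    for i in range(n - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result
```

In the GPS/INS setting, `xs` would hold GPS epochs (e.g. 1 Hz) and `x` an INS timestamp (e.g. 100 Hz) at which an interpolated GPS position is needed.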
One of the key issues in the design of a distributed web server system (DWS) is determining the optimal number of replicas and their placement on the web servers. This paper presents a hybrid tabu search (HTS) algorithm for replica placement in a DWS environment. We model the object replication problem as a 0-1 optimization problem and specialize tabu search into a specific algorithm for solving it by turning the abstract concepts of tabu search, such as initial solution, solution space, neighborhood, etc., into more concrete, problem-specific and implementable definitions. In addition, we hybridize the tabu search algorithm with a simulated annealing algorithm to speed up convergence without compromising solution quality. Through a simulation study and comparison with well-known replica placement algorithms, we demonstrate the applicability and effectiveness of our hybrid algorithm.
Keywords: Tabu Search, Simulated Annealing, Object Replication, WWW, Distributed Web Servers, Data Replication.
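The tabu-search specialization described above can be sketched for the 0-1 formulation: a solution is a bit vector (1 = server holds a replica), the neighborhood flips one bit, and recently flipped servers are tabu. This is a plain tabu search under illustrative assumptions; the paper's hybrid with simulated annealing is not reproduced:

```python
import random
from collections import deque

def tabu_replica_placement(n_servers, cost, k_replicas,
                           iters=100, tenure=2, seed=0):
    """Tabu-search sketch for 0-1 replica placement.  `cost` is any
    user-supplied function over placements; the initial solution
    places k_replicas replicas at random.  All details (move set,
    tenure, aspiration-free acceptance) are illustrative choices."""
    rng = random.Random(seed)
    sol = [1] * k_replicas + [0] * (n_servers - k_replicas)
    rng.shuffle(sol)
    best, best_cost = sol[:], cost(sol)
    tabu = deque(maxlen=tenure)            # recently flipped server indices
    for _ in range(iters):
        moves = []
        for i in range(n_servers):
            if i in tabu:
                continue
            nb = sol[:]
            nb[i] ^= 1                     # flip one bit = add/drop a replica
            moves.append((cost(nb), i, nb))
        if not moves:
            break
        c, i, nb = min(moves)              # best non-tabu move, even if worse
        sol = nb
        tabu.append(i)
        if c < best_cost:
            best, best_cost = nb[:], c
    return best, best_cost
```

Accepting the best non-tabu move even when it worsens the cost is what lets the search climb out of local optima; the global best seen so far is tracked separately.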
This study reviews the literature in the area of group support systems (GSS) and arrives at a model that integrates the task assigned to a group, the culture of the individuals in the group, and the facilitation process of the GSS. The purpose of the study is to describe the relationships in a culture-task fit model for groups working in GSS-enabled settings and its effect on group performance. The literature indicates the importance of the specifics of the task facing the group and of the group's structure (homogeneous vs. heterogeneous groups). The study concludes with four propositions that open doors for future research and proposes a method for testing them. The paper begins with an introduction, followed by a review of the literature in the areas of task, culture, and the GSS environment. The third section describes the conceptual model, followed by a section that describes the classification system used to define the focus of the study and states the propositions. The fourth section describes the method proposed for testing and validating this work, and finally, the fifth section states the conclusions and implications of the research.
Keywords: Group support systems, information systems, culture, group work, experimental design.
Extracting and classifying proper names is a key to improving the efficiency and the performance of many applications in the area of natural language processing and text mining. Valuable information in the text is usually located around proper names. To collect this information we need to find the proper names first. By extracting proper names from the text we provide these applications with the proper names found in the text, their location, and some information about each. Proper names in Arabic do not start with capital letters as in many other languages so special treatment is needed to find them in a text. Little research has been conducted in this area; most efforts have been based on a number of heuristic rules used to find names in the text; some have used graphs to represent the words that might form a name and the relationships between them; and some have used statistical methods for this purpose. In this paper we present a new technique to extract names from text using a hybrid system based on both statistical methods and predefined rules. First we tag the proper name phrases in the text that may include names; second we use statistical methods to extract proper names from these candidate phrases; and third, we classify each proper name with respect to its major class and its subclass. We have developed a variety of rules and tested several different assumptions to accomplish the goals of this research.
Keywords: Arabic Language, Proper Nouns, Tagging, Classification, Rules, Statistical Methods.
Efficient association rule mining is typically divided into two sub-problems: mining frequent sets from databases, and generating association rules from them. Frequent sets lie at the basis of many data mining algorithms, and as a result, hundreds of algorithms have been proposed to solve the first problem, frequent set mining. In this paper, we survey the most successful algorithms and techniques that try to solve this problem efficiently.
Keywords: Frequent Set Mining, Association Rule, Support, Apriori.
Knowledge is one of an organization's most important assets, influencing its competitiveness. One way to capture an organization's knowledge and make it available to all its members is through the use of knowledge management systems. In this paper I discuss the importance of knowledge management in software development and present an infrastructure for dealing with knowledge management in software engineering environments (SEEs).
Knowledge is one of an organization's most valuable assets. In the context of software development, knowledge management can be used to capture the knowledge and experience generated during the software process.
This research paper addresses a new way of thinking about the role of knowledge management in software engineering environments by developing a new extended hybrid framework that combines five types of knowledge (user requirements knowledge, functional domain knowledge, technical knowledge, project status knowledge, and project experience knowledge) with five phases of software development (planning, analysis, design, implementation, and maintenance & support) and five phases of the knowledge management life cycle (capture, creation, codification, communication, and capitalization). I call this new framework "An Extended Knowledge Management Framework during Software Development Life Cycle".
This paper highlights knowledge management in software environments: its challenges, opportunities, implementation, and success factors.
Keywords: software development (SD), knowledge (K), knowledge management (KM), organizational memory (OM), requirements knowledge, domain knowledge, technical knowledge.
Decision-making aid is primarily concerned with problem resolution, which rests on a clear identification of decision situations. It considers decisional behavior not as necessarily guided by a single criterion but as a possible resultant of several criteria. Our study aims at improving the quality of decisions brought to the decision-making process by proposing an expert system for piloting a dynamic, evolutionary and robust structure.
The interest of an expert system within the framework of production is underlined by its use: it expresses the needs for decision-making aid and consumes expertise. The system carries out, in collaboration with the operator, the preselection of a set of acceptable resources. The potential solutions to the arising problem are then treated by a sorting procedure, which carries out a final selection of the best resource: the one that satisfies the criteria of delay, cost and quality. The decisional modules, which are based on the expert system, are added to the multi-agent structure dedicated to piloting. The agent actions are concretized through analysis and reaction procedures, which allows adequate decisions to be launched.
Keywords: Expert System (ES), Decision, Multicriteria Assistance, Automated System of Production (ASP), Multi-Agent.
The evolution of the Internet is affecting many nations around the world and forcing changes to business and socio-economic development plans. It has major implications for the realization of the concept of globalization. The Arab world, comprising developing countries with economies in transition, has been investing in building its communications infrastructure and has been adopting the Internet since 1995 as a vital tool for development. This paper provides a deeper understanding of the key issues surrounding Internet use in the Arab world, with a focus on the challenges relating to a number of social, technological, financial and legal issues. A number of solutions and recommendations are suggested, having to do with collaboration between governments and the private sector in each country, and between the specialized institutions within the Arab countries, to diffuse the use of the Internet in the Arab world.
Keywords: Internet, Infrastructure, barriers, Digital Divide, Arab World.
The World Wide Web currently contains billions of documents, which makes it difficult for users to find the information they want. Search engines help users find their desired information, but they still return hundreds of irrelevant web pages that do not fulfill the user's query. Several search engines use clustering to group documents relevant to the user's query before returning them, but no document clustering algorithm is accurate enough to prevent the retrieval of irrelevant documents. In this research, the researchers introduce a new technique to enhance cluster quality by using user browsing time as an implicit measure of user feedback, rather than using explicit user feedback as in previous research and techniques. The major contributions of this work are: investigating user browsing time as an implicit measure of user feedback and demonstrating its efficiency, enhancing cluster quality through a new clustering technique based on user browsing time, and developing a system that tests the validity of the proposed technique.
Keywords: Web Mining, Data Mining, Implicit Feedback, Clustering, Filtering, Search Engine.
Despite the high volume of shopping done on the Internet each day, many consumers fail to make online purchases because of continued reluctance to engage in transactions with intermediaries that are not familiar and trusted. Existing research on consumer behavior on the Internet has focused on Internet purchasing or on information searching through the Internet. Some studies stressed ease of use, while other studies concentrated on usefulness, as having strong effects on Internet usage. While the effects of ease of use and enjoyment are partly supported, it has become clear, as the field expands, that ease of use and usefulness cannot be the only predictive criteria for an individual's adoption of a microcomputer technological innovation. This study examines the role of perceived risk in user satisfaction and the decision to adopt, because it is more powerful in explaining online consumers' behavior than ease of use or usefulness.
Keywords: Diffusion of Innovation, Internet shopping, Perceived Risk.
Abstract: A relational database (RDB) schema is a description of database requirements in terms of a set of relations and a set of integrity constraints. An Entity-Relationship (ER) data model is a high-level conceptual data model that is frequently used for the conceptual design of databases. ER data models represent a concise description of users' data requirements without including implementation details; because of that, they are usually used to communicate with non-technical users, since they are easier to understand. Some relational database designers use the concept of a universal relation and perform normalization to arrive at the relational database schema, without developing an ER data model. We advocate that the best practice for relational database design is to start by developing a conceptual schema such as an ER data model and then map it to a relational database schema (as many CASE tools support). In this article, a CASE tool that performs the reverse process, generating an ER data model from a relational database schema, is presented. This tool is very useful in obtaining a conceptual schema from a relational database schema. It can also be thought of as a kind of reverse-engineering CASE tool that aids in the reverse engineering of legacy databases when considering new implementation technology options.
Keywords: Conceptual schema, ER models, automated software engineering, case tools.
Information security ensures the ongoing confidentiality, integrity and availability of information. Information security risk management assures correct protection of assets from the relevant range of risks through a process which typically begins with a risk assessment. In recognition of the importance of information security risk management, formal standards and guidelines have been released that detail a base process. However, there is no widely accepted optimum risk assessment methodology for Small and Medium Enterprises (SMEs). This paper discusses the available risk assessment methodologies along with the specific constraints of SMEs, and proposes a risk assessment methodology for them. The proposed methodology helps quantify the security gaps between the assessed and the desired assurance levels. Relative Risk Benchmarking (RRB), proposed in this paper, is an open and transparent benchmark for measuring the relative risks faced by any organization. Today the risks faced by an SME are diverse and varied, and are interrelated with each other as well as with the overall risk (and thereby the security posture). A risk-management framework based on RRB would bring out the relative importance of the different elements of information security to the business with respect to the overall information security status of the enterprise. The output of a risk assessment process based on RRB provides necessary guidance to enterprise security managers for the allocation of resources.
Keywords: Information Security, SME, Risk, Security, Enterprise Risk, Risk Analysis, Risk Management.
This paper describes a proposal for a system for XML data Integration and Querying via Mediation (XIQM). An XML mediation layer is introduced as a main component of XIQM. It is used as a tool for querying heterogeneous XML data sources associated with XML schemas of diverse formats. Such a tool manages two important tasks: mappings among XML schemas and XML data querying. The former is performed through a semi-automatic process that generates local and global paths. An XML Query Translator for the latter task is developed to translate a global user query into local queries using the mappings that are defined in an XML document.
KEYWORDS: Data integration, mediation, XML Schema, XML Query languages.
We propose in this paper a new dynamic replication algorithm called BestCluster. The BestCluster replication algorithm takes a replication decision according to the total number of requests initiated by a cluster of users, rather than depending on the requests initiated by a single user. The objective is to benefit a group of users rather than only one user. The cluster associated with a user node is composed of the user node and its neighbours; two user nodes in a network are neighbours if there exists a physical link between them. The implementation and evaluation of the BestCluster replication algorithm have been performed using OptorSim, a data grid simulator. The preliminary experimental results have shown that BestCluster could be a good replication approach, especially in wide area networks such as data grids.
Keywords: Data grids, Content Distribution Networks (CDN), World Wide Web, Replication, Facility Location Problem, k-median Problem.
Crossover is an important and delicate operation of Genetic Algorithms (GAs): the most used techniques are 1-point crossover and MPX, which are based on the notion of traditional crossing in genetics. In this paper, we show their limitations and introduce a new operator (called "Bestof2"), inspired by modern genetics, which is better able to generate well-adapted solutions and to preserve them during the search for the optimal solution, in order to converge quickly. We apply this approach to one of the cryptography algorithms based on GAs, and the results obtained by simulation prove the efficiency of this approach.
Key-words: Genetic Algorithms, 1-point crossover, MPX, Bestof2, cryptography.
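For reference, the 1-point crossover that the paper takes as a baseline works as follows: pick a random cut point and swap the tails of the two parent chromosomes. The "Bestof2" operator itself is the paper's contribution and is not reproduced here:

```python
import random

def one_point_crossover(parent_a, parent_b, rng=random):
    """Classical 1-point crossover: cut both parents at the same
    random position and exchange the tails.  This is the standard
    baseline operator, not the paper's 'Bestof2'."""
    assert len(parent_a) == len(parent_b)
    cut = rng.randrange(1, len(parent_a))   # cut strictly inside the chromosome
    child1 = parent_a[:cut] + parent_b[cut:]
    child2 = parent_b[:cut] + parent_a[cut:]
    return child1, child2
```

A known limitation, which motivates richer operators, is that genes far apart on the chromosome are almost always separated, so good widely spaced gene combinations are easily destroyed.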
In this paper we propose an offline electronic check payment protocol which offers payer anonymity toward the payee. In our protocol, we adopt the scenario of the traditional check payment system: we follow the general sequence of steps of the check payment process, satisfying all its requirements, or at least the security and functionality goals behind them, with careful consideration of the characteristics of the electronic check (eCheck) as well as the anonymity of the payer. This faithful adaptation keeps the advantages of the traditional check system alongside the new features offered by its electronic counterpart. In our protocol, the payee can verify the correctness and primary validity of an eCheck, and is provided with guarantees in order to trust and thus accept the payment, without affecting the payer's anonymity. A correct eCheck is considered a guarantee of a later deposit of the enclosed amount of money. To encourage payees to trust and accept such a system, we offer different verification and security mechanisms which lead to a trusted and high-assurance eCheck payment that preserves payer anonymity. The proposed protocol provides users with additional alternatives for anonymous electronic payment, allowing a wider usage of the eCheck.
Keywords: e-commerce, anonymity, security protocols.
Traditional instruction in a developmental math course was compared to computer-facilitated instruction using Prentice-Hall's Interactive Math computer program. Two groups of students were used: a control group receiving traditional instruction and an experimental group using the interactive math software. Toward the end of the semester, qualitative interview data were collected from a subset of subjects and from the course instructor to clarify instructional methods and procedures and to provide insight into the quantitative findings. Both groups realized improvements in mathematics achievement following instruction. However, while the attitude of the control group toward math learning remained the same, that of the experimental group was greatly improved over the course of instruction.
Analysis of the qualitative data within the context of educational theory revealed that the computer-facilitated instruction was not implemented in a manner designed to make the best use of this instructional modality. The results concerning mathematics achievement cannot be said to denote the superiority of traditional instruction. These findings highlight the importance of collecting qualitative data in otherwise quantitative studies assessing computer-facilitated instruction. What was revealed, however, was a significant improvement in students' attitudes towards learning math. Those using the interactive math program experienced far less math anxiety. Their stress level was greatly reduced and their confidence enhanced.
Keywords: Qualitative Data, Computer-facilitated instruction, Attitude toward Math, Interactive Math.
The Web is a large and growing collection of texts. This amount of text is becoming a valuable resource of information and knowledge. Finding useful information in this source is neither an easy nor a fast task; people, however, want to extract useful information from this largest of data repositories.
Researchers Search Engine on the WEB (RSEWEB) is a framework for the automatic collection and processing of resources related to researchers' information on the World Wide Web. The current RSEWEB implementation searches, retrieves, and extracts information about researchers from many servers on the Web and combines it into a single searchable database.
This paper discusses the background and objectives of RSEWEB and gives an overview of the functionality and implementation of the RSEWEB system used to construct a specialized database about researchers.
We intend to develop the system further and integrate it with other applications, such as ThesWB for advanced document management. The system can also be utilized to automate conference organization and in other real-world applications.
Keywords: Information Extraction, Knowledge Discovery, Web Mining, Document Management.
In this work, we are interested in optimized multi-criteria static allocation in a real-time distributed system where tasks are subject to precedence constraints. In this framework, it is necessary to ensure, before execution, that all possible execution scenarios satisfy the temporal constraints while minimizing the cost and size of the hardware architecture and making the best use of its resources. In this resource allocation problem (placement and scheduling), which is NP-complete, satisfying several criteria can be contradictory. To solve this problem, we propose a generic multi-agent system, a dynamic component coupled with a static scheduling strategy, in order to integrate, in particular, the load balancing criterion. Thus, the need for a dynamic model arose from the consideration of a heuristic based on list scheduling. An experimental analysis was carried out in the PVM (Parallel Virtual Machine) parallel programming environment and shows the interest of our method. This holds for any heuristic using the execution dates of the tasks (operations), in particular for the AAA (Algorithm, Architecture, Adequacy) method developed at INRIA, which has been the subject of several extensions.
Keywords: static placement/scheduling, load balancing, distributed real-time systems, multi-agent systems.
Many works have proposed decision support systems for the regulation of urban transportation systems. However, to our knowledge, no system offers a real interaction between the regulator and the decision support system. Indeed, the majority of these systems tend to automate regulation procedures, which contradicts the very definition of an interactive decision support system.
In this article, we propose to identify the decision-making process of the regulator in the Centralized Control Unit (CUC), in order to offer the regulator, at the design phase, the possibility of constructing the regulation solution by choosing among various strategies, with the aim of ensuring complete interaction between the regulator and the decision support system for regulation.
Key words – Urban Transportation System (UTS), Decision Support System for Regulation (DSSR), regulation algorithms, cognitive engineering, Automatic Vehicle Monitoring (AVM).
A driving force in healthcare is to offer the proper treatment with minimum risk and the fastest recovery. Biomedical equipment and devices play a pivotal role in the continuum of patient care. This paper describes the benefits of the Medical Equipment Performance Record (MEPR) and the technologies that have facilitated the development of a prototype. We consider MEPR a very important and significant example of a life-spanning system.
The paper starts with a brief description of the government efforts in developing e-health services in Bahrain and its challenges. This is followed by a description of MEPR, its objectives, architecture and its challenges. The focus of the paper is then directed towards the development of a prototype system and the presentation of some screen shots of the system.
Keywords: e-health, healthcare, medical equipment performance record.
Web Services (WS) and Service-Oriented Architecture (SOA) are currently evolving rapidly. They allow systems to communicate with each other using standard Internet technologies. Systems that have to communicate with other systems use communication protocols and data formats that both systems understand. This interest in Web Services has coincided with the proliferation of XML, Java technology, and Business-to-Business (B2B) commerce.
The key attraction of Web Services comes from the business drivers of cost reduction and B2B integration. The B2B standards try to formalise Business Collaboration (BC) based on document exchange between partners. This collaboration is defined in some standards in a machine-readable format and in others it is not. ebXML and RosettaNet specify such collaboration. This paper compares business processes based on ebXML and RosettaNet with Web Service technology. It then shows how these standards can be integrated into the same B2B architecture.
The ethics of the adoption of Information Technology by Muslims must be researched in order to increase prosperity and to stop many unethical practices such as software piracy. Ethics among IT professionals and the general public, for example in the sale of software, are but a few of the issues that need to be researched. This research examines the computer ethics principles presented in the ACM Code of Conduct from an Islamic point of view, by studying the relevant verses of the Holy Quran and the Hadiths of Prophet Mohammed (pbuh). We used the ACM Code of Conduct as a base to develop an Islamic computer code of ethics to be used by IT personnel and institutions.
Keywords: Ethics, IT Ethics, Islamic Ethics, Islamic computer Ethics.
This paper describes the combination of Shamir's secret sharing method with the trapdoor characteristics of the discrete logarithm modulo a large prime number, as employed in the Diffie-Hellman exchange scheme. The purpose of this combination is to protect the privacy of the numbers held by the parties to the shared secret, while at the same time authorizing them to broadcast data in the clear which, when published by a sufficient number of the parties, enables authorization of the transaction (for instance, by the required number of signers of a cheque).
Key words: Public key encryption, Shared Secret, Discrete logarithm, Diffie-Hellman Scheme.
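As an illustration of the first ingredient of the scheme, the following is a minimal sketch of Shamir's (k, n) threshold secret sharing over a prime field. The prime, parameters, and function names are illustrative assumptions, and the discrete-logarithm/Diffie-Hellman layer the paper adds on top is omitted.

```python
import random

PRIME = 2**127 - 1  # a large (Mersenne) prime; all arithmetic is mod PRIME

def make_shares(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    # Random polynomial of degree k-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Modular inverse of den via Fermat's little theorem.
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```

Each party keeps its share (xi, yi) private; only when a quorum of k parties cooperates can the shared value, and hence the authorization, be recovered.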
A signature verification algorithm has been designed and implemented. The algorithm is based on the multiwavelet transform and a neural network. Two-level decomposition is used to extract the main features of the training image, producing twenty-eight sub-images. The four "Low-Low" sub-images are eliminated and the energy is calculated for only the remaining twenty-four sub-images. A probabilistic neural network is adopted to verify the signature. The system is implemented in the MATLAB environment.
Keywords: Verification, Multiwavelet, Probabilistic Neural Network, Normalized energy, Image recognition.
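The energy features described above can be sketched as follows; the toy sub-images and function names are illustrative stand-ins for the twenty-four retained multiwavelet sub-images, and the actual system computes them in MATLAB.

```python
def band_energy(band):
    """Energy of one sub-image: sum of squared coefficients."""
    return sum(c * c for row in band for c in row)

def normalized_energies(bands):
    """Normalized energy feature vector over the retained sub-images."""
    energies = [band_energy(b) for b in bands]
    total = sum(energies) or 1.0  # guard against an all-zero image
    return [e / total for e in energies]

# Toy example: three 2x2 "sub-images" standing in for the 24 retained bands.
bands = [[[1, 2], [3, 4]], [[0, 1], [1, 0]], [[2, 2], [2, 2]]]
print(normalized_energies(bands))
```

The resulting vector (one normalized energy per sub-image) is what would be fed to the probabilistic neural network for verification.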
An Arabic phoneme recognizer has been designed and implemented. The proposed recognizer consists of two parts, reference and test. The reference part holds the main features of the Arabic phonemes: each phoneme is represented by a single feature vector. The features are extracted using Linear Predictive Coding and Vector Quantization. The test part is the uttered word; recognition is performed frame by frame over the uttered word. The recognizer is implemented in the MATLAB environment.
Key words: Arabic phoneme recognition, Vector quantization, Linear predictive coding.
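The frame-by-frame matching step might look like the following sketch, in which each test frame is assigned to the nearest reference phoneme vector. The codebook values and phoneme labels are hypothetical, and the LPC/VQ feature extraction itself is not shown.

```python
import math

def euclidean(u, v):
    """Distance between a frame's feature vector and a reference vector."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Hypothetical reference part: one feature vector per phoneme.
codebook = {"ba": [0.9, 0.1, 0.2], "ta": [0.2, 0.8, 0.1], "tha": [0.1, 0.2, 0.9]}

def recognize(frames):
    """Assign each frame of the test utterance to its nearest phoneme."""
    return [min(codebook, key=lambda ph: euclidean(f, codebook[ph])) for f in frames]

print(recognize([[0.85, 0.15, 0.2], [0.15, 0.75, 0.05]]))  # ['ba', 'ta']
```

In the actual system the reference vectors come from LPC analysis followed by vector quantization of training utterances.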
One of the primary advantages of replicating data to applicable sites in distributed database systems is improved data availability. Several techniques have been proposed for managing replicated data in distributed and grid database systems. Diagonal Replication on Grid (DRG) is one method for efficient data replication, where data is replicated synchronously along a diagonal of a logical grid structure. No solution so far addresses the issue of security during replication: data files can be replicated to nodes where malicious attacks might be launched against the data. This paper proposes integrating the notions of node trust and security into the diagonal replication technique. Data is replicated to the diagonal sites ordered from the most trusted nodes to the least trusted ones. The possibility of malicious actions against a given file is thereby minimized, guaranteeing a more secure replication of databases on the grid.
Keywords: Database Security, Replication, Grid System.
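The trust-ranked site selection can be sketched as follows; the grid coordinates, trust scores, and threshold are hypothetical, and the sketch only illustrates ordering diagonal sites from most to least trusted before replication.

```python
# Hypothetical trust scores for the nodes on the diagonal of a 4x4 logical grid.
trust = {(0, 0): 0.92, (1, 1): 0.55, (2, 2): 0.78, (3, 3): 0.31}

def replica_sites(trust_scores, min_trust=0.5):
    """Order diagonal sites from most to least trusted and keep only
    those meeting a minimum trust threshold."""
    ranked = sorted(trust_scores, key=trust_scores.get, reverse=True)
    return [site for site in ranked if trust_scores[site] >= min_trust]

print(replica_sites(trust))  # [(0, 0), (2, 2), (1, 1)]
```

A replica would then be placed on the highest-ranked sites first, so the least trusted diagonal nodes hold data only as a last resort.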
Numerical Weather Prediction (NWP) Models are considered to be the backbone for national meteorology services. They can predict the future state of many weather parameters such as air temperature, wind speed, wind direction and cloudiness. There is a need to evaluate the accuracy of NWP models in order to find out systematic errors that the model may have and tune them subsequently. This evaluation is based on comparing the output of NWP models with the actual weather observations. This paper presents the design and implementation of a portable station value verification package, which is widely used around the world. In addition, the paper also presents some algorithms that were used in the package, such as the wind direction averaging algorithm.
Keywords: Numerical Weather Prediction, model verification, database, statistics scores.
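One such algorithm, vector averaging of wind directions, can be sketched as follows (a plain arithmetic mean fails for directions that straddle north); this is the standard circular-mean formulation, not necessarily the exact variant implemented in the package.

```python
import math

def mean_wind_direction(degrees):
    """Vector-average wind directions given in degrees.

    A plain arithmetic mean fails near north: mean(350, 10) should be
    about 0 (north), not 180 (south)."""
    x = sum(math.cos(math.radians(d)) for d in degrees)
    y = sum(math.sin(math.radians(d)) for d in degrees)
    return math.degrees(math.atan2(y, x)) % 360

print(mean_wind_direction([350, 10]))  # close to 0 (north), not 180
```

The same vector components can also be weighted by wind speed when a speed-weighted mean direction is required.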
Maintenance of integrity is an important issue in parallel database systems. Testing the validity of a large number of constraints against large databases after each update operation is an expensive process. Many different methods for integrity constraint maintenance have been proposed by researchers, with the main focus in the literature being the simplification process. The simplified forms of the Integrity Constraints (ICs) are called integrity tests. The integrity constraints defined on a database must be semantically consistent, with no contradiction among them; adding new constraints may contradict existing integrity constraints in the database. This article concentrates on how to verify whether a new integrity constraint contradicts the existing constraint set defined on a parallel database. Another focus of this article is how the system checks whether the constraints have been invalidated by a transaction and repairs any integrity violations.
Key-Words: Constraint Maintenance, Transaction Verification, Constraint Simplification, Integrity Enforcement, Semantic Integrity Systems, Constraint Verification.
The REACH initiative is among the most important plans putting forward recommendations, goals, and a timetable to further advance ICT in Jordan. This paper presents two low-cost and practical activities that can easily help implement the recommendations of the REACH initiative. The paper argues that Jordan's competitiveness and international profile in ICT can be greatly enhanced by making Jordan the main production hub for Arabic-language web content; it presents arguments for adopting this activity, discusses the obstacles to its adoption, and concludes that it is very viable. The paper also calls for the creation of an ICT market intelligence unit that serves all stakeholders in the ICT industry in Jordan. This unit will ensure that ICT stakeholders, whether governmental, academic, or industrial, are well informed of the status of ICT locally, regionally, and internationally, and can provide important data on current and future trends in the industry. The paper argues the merits of and reasons for establishing such a unit and concludes that it will greatly help the future of ICT in the country. Suggestions for the structure of such a unit are also provided.
Keywords: ICT and society, REACH initiative, Arabic web content, Jordan, ICT industry.
Knowledge is considered the vital nerve of all the work of contemporary organizations. Taking advantage of developments in information technology and artificial intelligence systems (expert systems), this research presents a new attempt, to the best of the researcher's knowledge, to design a model of a "knowledge engineering system" concerned with how organizations extract knowledge, especially tacit knowledge, and with structuring, organizing, and storing it in a knowledge base that can be consulted when needed. The proposed system is governed by a set of assumptions and operates through a set of components classified into physical and human components, in addition to the processes that run the system; the system's components and processes were translated into a hypothetical diagram illustrating the relations among these variables. Two research hypotheses were formulated: the first concerns the existence of significant correlations among the proposed dimensions of the knowledge engineering system, and the second concerns the importance of including all the proposed components and processes in the system. To test these two hypotheses, a questionnaire was developed and distributed to a random sample of professors in computing-related scientific departments of Iraqi universities in the Baghdad governorate, and correlation coefficients and factor analysis were used to test them. The research reached the following conclusions: the sample members possess levels of knowledge (information, practice, experience) of the system's dimensions, which reflects the feasibility of its application; the dimensions of the knowledge engineering system are linked by significant positive correlations; and all the investigated variables were accepted within the dimensions of the designed system, each being important enough to be included. The research is organized into four sections: the introduction, which presents the research problem, hypotheses, importance, and objectives; the second section, which formulates a conceptual framework for the proposed knowledge engineering system model; the third section, devoted to the practical part, which includes a descriptive summary of the sample's responses, the results of the correlations among the investigated variables, and, using factor analysis, the factors that can be included in the system ranked by importance; and finally the conclusions and recommendations supporting this system.
Keywords: knowledge acquisition, knowledge engineer, knowledge base.
Website Information Architecture (IA) is an emerging discipline which focuses on the principles of design and architecture in the digital landscape. However, the discipline is broadly addressed by an IA community of practice known as information architects. Many design principles and evaluation criteria that have been proposed for the development of website IA generally lack theoretical background and justification. Several suggested measures in the discipline were based on existing practices with no explicit construct. This paper attempts to give a formal treatment of website IA through theoretical grounding in the architectural domain. The framework for understanding architectural building is inspired by the fundamental ideas of Vitruvian theory, developed more than 2000 years ago and used to validate the construction of a building. Drawing on this architectural theory and on related dimensions of IA studies, we argue that theoretical grounding from the architectural domain may be further specified to frame the loosely threaded methodology of IA website design and development.
Keywords: Website Information Architecture, Architectural, Website Design.
There are different crossover techniques for genetic algorithms. This study aims to give an experimental comparison between different crossover techniques. The crossover techniques are categorised according to the representation used, and the paper thoroughly shows how each operator works by giving complete examples. The comparison includes crossover techniques for binary, value, and permutation based representations. The different techniques were used to solve a number of NP problems such as the knapsack problem, systems of linear equations, and the Schubert function. Furthermore, the results were analysed using the Jonckheere-Terpstra test. The Heuristic crossover was found to be the best crossover technique for many problems.
Keywords: Artificial Intelligence, Genetic Algorithms, Crossover Techniques, Jonckheere-Terpstra test.
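For concreteness, the simplest of the compared operators, 1-point crossover for binary representations, can be sketched as follows; the Heuristic crossover found best in the study additionally uses fitness information, which this sketch omits.

```python
import random

def one_point_crossover(p1, p2):
    """Swap the tails of two parent chromosomes at a random cut point."""
    cut = random.randrange(1, len(p1))  # cut strictly inside the chromosome
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

random.seed(0)  # make the example repeatable
a, b = one_point_crossover([1, 1, 1, 1, 1], [0, 0, 0, 0, 0])
print(a, b)
```

For all-ones and all-zeros parents, every gene is preserved overall: the two children together contain exactly the genes of the two parents, just recombined at the cut.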
DNA oligonucleotides (words) are used for computations as well as other nanoscale applications. These words should hybridize as designed in order to provide correct results. Unfortunately, this is not always the case, and therefore, hybridizations in DNA spaces (all possible DNA words of a given length) were studied to provide reliable DNA-based applications.
In this paper, hybridizations were modeled as a Gamma distribution, which is a common distribution in reliability theory. Using this model, the consequences of hybridization errors for the reliability of DNA-based applications can be analyzed and understood. Eventually, this should help in selecting better DNA words and building more reliable applications. Words with more A and T bases than G and C bases have fewer cross-hybridizations and therefore might be more reliable in applications.
Finally, based on the Gamma distribution, characteristics of DNA spaces are estimated. These characteristics include the mean number of occurrences of cross-hybridizations and the ratio of highly connected words in a DNA space.
Keywords: DNA Computing, Gamma distribution, modeling.
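A method-of-moments fit of the Gamma model to observed cross-hybridization counts might look like the following sketch; the counts are hypothetical and the estimator is a standard one, not necessarily the procedure used in the paper.

```python
def gamma_moments(samples):
    """Method-of-moments estimates for a Gamma(shape k, scale theta):
    mean = k * theta, variance = k * theta**2."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    theta = var / mean          # scale
    k = mean / theta            # shape
    return k, theta

# Hypothetical cross-hybridization counts per word in a small DNA space.
counts = [2, 3, 5, 4, 6, 3, 2, 7, 4, 4]
k, theta = gamma_moments(counts)
print(k, theta, k * theta)  # k * theta recovers the sample mean
```

From the fitted parameters one can read off space-level characteristics such as the mean number of cross-hybridizations (k * theta) or tail probabilities for highly connected words.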
Student performance in university courses is of great concern to higher education management, as several factors may affect performance. This paper attempts to use data mining processes, particularly classification, to help enhance the quality of the higher educational system by evaluating student data to study the main attributes that may affect student performance in courses. For this purpose, the CRISP framework for data mining is used for mining student-related academic data. The classification rule generation process is based on the decision tree as a classification method, and the generated rules are studied and evaluated. A system that facilitates the use of the generated rules is built, allowing students to predict their final grade in a course under study.
Key Words: Data Mining, Classification, Decision Trees, Student Data, Higher Education.
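Applying mined classification rules to predict a final grade could be sketched as follows; the attributes, rule set, and grades are entirely hypothetical and only illustrate how rules generated from a decision tree are consulted.

```python
# Hypothetical mined rules: (attribute conditions -> predicted grade).
rules = [
    ({"attendance": "high", "midterm": "A"}, "A"),
    ({"attendance": "high", "midterm": "B"}, "B"),
    ({"attendance": "low"}, "C"),
]

def predict(student):
    """Return the grade of the first rule whose conditions all match."""
    for conditions, grade in rules:
        if all(student.get(attr) == value for attr, value in conditions.items()):
            return grade
    return "unknown"  # no rule covers this student

print(predict({"attendance": "high", "midterm": "A"}))  # A
```

Each path from the root of the decision tree to a leaf yields one such condition set, so applying the rules reproduces the tree's classification.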
A single Digital Library (DL) contains enormous research knowledge which can be used in several application domains, e.g., e-learning and e-research. Due to the enormous increase in digital content and in the number of facilitating services, DLs face the challenge of seeking wide-scale deployment solutions in Technological Spaces (TS). Such solutions necessitate: (i) understanding the process models of DLs at both macro and micro levels, and (ii) identifying suitable candidates from TS for their successful deployment. This paper is an effort to cover the first of these challenges and captures the socio-technical aspects of DL processes by modelling them both in Riva and in Role Activity Diagrams (RADs). A first-cut Riva-based architecture of DLs provides a macro view of inter-communicating and evolving complex processes. This has been further elaborated into a micro view by applying RADs to the Scientific Publishing Process as an example. This macro-micro process modelling combination helps to understand, identify, and reduce the technical implications that may arise at later stages of DL system development, deployment, and evolution. Finally, this paper is a step towards identifying DL processes and the role of Riva and RADs in their enactment.
Keywords: Digital Library Processes, Scientific Publishing Process, Business Process Architecture, Process Modelling, Riva, RADs.
Partial order approaches seek to solve the combinatorial state-space explosion problem by tackling one of its causes, namely the representation of parallelism by interleaved execution of actions. This paper proposes the joint use of covering steps and maximality semantics as a partial order approach for the resolution of this problem.
Keywords: Formal verification, Partial order semantic, Maximality-based semantic, Covering step graph.
Quality is an unquantifiable trait: it can be discussed, felt, and judged, but cannot be weighed or measured. Validating software systems early in the development lifecycle is becoming crucial. Early validation of functional requirements is supported by well-known approaches, while the validation of non-functional requirements, such as complexity or reliability, is not. Early assessment of non-functional requirements can be facilitated by automated transformation of software models into (mathematical) notations suitable for validation. These types of validation approaches are usually kept as transparent to the developers as possible. Services will only be widely accepted by users if their quality reaches an acceptable level. UML is rapidly becoming a standard, both in development and in research environments, for software development. The work in this paper extends Quality with UML (QWUML; IDIMT-2004, SEN-2005), i.e., measurement of system quality in combination with UML modeling. This paper discusses some important issues regarding system design modeling in association with quality, complexity, and design aspects using UML heuristics.
Keywords: UML (Unified Modeling Language), QWUML (Quality with UML), DB (Database), KAS (Number of key attributes), DRC (Depth of relationships between classes), IRA (Inter-relational attributes), IRM (Inter-relational methods).
This study was based on the major assumption that the lexical structure of Arabic textual words involves semantic content that could be used to determine the class of a given word and its functional features within a given text. Hence, the purpose of the study was to explore the extent to which we can rely on word structure to determine word class without using language glossaries and word lists or the textual context. The results indicate that the morphological structure of Arabic textual words was helpful in achieving a success rate approaching 79% of the total number of words in the study sample. In certain cases, the approach adopted in the investigation was not adequate for class tagging for two major reasons: the first was the absence of prefixes and suffixes, and the second was the incapability of distinguishing affixes from original letters. It was concluded that the approach adopted in this study should be supplemented with other techniques adopted in other studies, particularly the textual context.
Keywords: Arabic Language Processing, Word Class Tagging, Part-Of-Speech Tagging, Morphological Analysis.
This paper describes a novel performance evaluation technique for a multi-rate combined Code Division Multiple Access (CDMA) and Space Division Multiple Access (SDMA) multiuser receiver. A single sector in a cell adopting the combined CDMA and SDMA system is considered, accommodating an arbitrary number K of users transmitting their data at different rates according to a predetermined minimum required Quality of Service (QoS). The Bit Error Rate (BER) expressions for user Uij (the j-th user in the i-th rate class (media)) have been derived in exact form. A Parallel Interference Canceller (PIC) is adopted as a suboptimum multiuser detector. The system performance for different numbers of antenna elements is investigated. Moreover, a comparison between combined CDMA and SDMA with and without the PIC canceller is presented. Finally, the system performance of both pure CDMA and combined CDMA and SDMA systems is studied. The obtained results show that the BER improves with an increasing number of antenna elements. Also, the system capacity of a combined CDMA and SDMA system is improved compared to the pure CDMA system. Finally, the system capacity of a combined CDMA and SDMA system with PIC is noticeably improved compared to the one without PIC (the conventional multiuser receiver).
Keywords: Multi-Rate, Wireless CDMA, SDMA, Multiuser Detection (MUD), Parallel Interference Cancellation (PIC).