Muzhir Shaban Al-Ani and Khattab Alheeti
Abstract: Vehicle growth causes serious problems worldwide, particularly in crowded cities. The intelligent traffic control algorithm implemented here incorporates several parameters, such as road congestion, emergency vehicles and road intersections. Intelligent cameras are connected to capture real-time traffic flow images in each direction. The control system can automatically adjust the traffic light control parameters according to changes in traffic flow in different directions, thereby increasing the traffic efficiency of road intersections and achieving better traffic control. This work requires a study of traffic control in the city where the system will be implemented.
Mohamed Elammari, Reem Elsaeti
Abstract: Agent-oriented software engineering is a new technology made available to software engineering science, which has received a great deal of attention in the last decade and has since become one of the most active areas of research and development in the computing field. Despite the existence of different agent methodologies, languages, architectures and successfully developed agent-based applications, agent-oriented software engineering remains at an early stage of evolution. Thus, not only are new methodologies urgently required, new evaluation techniques are also mandatory. It is not easy to specify a particular methodology for the development of any system. Furthermore, there are no methods for determining the advantages and drawbacks of each methodology. In this paper, the authors compare the four most commonly used MAS meta-models (ROADMAP, HILM, Styx, SONIA) in order to identify the main components that can be used to specify a single, generic MAS meta-model.
Integrated Controller for Grid Connected Wind Turbine, Based On Neural Networks
Alaa Hashad, Fathy Z. Amer, Ahmed M. El-Garhy, Ahmed E. Youssef, Sabry M. Aly
Abstract: Electrical power production is the main target required from wind turbines. This paper describes an approach for a wind turbine controller, a vital part of the turbine, based on the Artificial Neural Network (ANN) technique, where a control scheme has been applied and validated by detailed simulation in MATLAB 6.5/Simulink. The proposed controller enhances the reactive power level, which is affected by electrical grid voltage and/or load disturbances; the controlled variables are system voltage and power production. Final results were compared with a practical database of wind turbine runs by conventional controllers without ANN, and were found to be positive. Furthermore, this research deals with a grid-connected wind turbine. A typical Self-Excited Induction Generator is used as a case study. The intermediate outputs of the generator's different modules are presented. The model will be useful for wind energy developers in designing wind energy conversion systems when planning wind power stations. The ANN-based controller is much faster and more adaptive in maintaining maximum power conversion efficiency, which remains steady at its maximum in the same region of the power/wind speed curve during sudden load or wind variations. Keywords: Induction Generator, Simulation, Voltage Stability, Multilayer.
Using Wikis to Develop Writing Performance among Prospective English as Foreign Language Teachers
Manal Mohammed Khodary Mohammed
Abstract: This study aimed at investigating the effect of using wikis on developing prospective English as a Foreign Language (EFL) teachers' writing performance. The participants were fourth-year prospective EFL teachers at Suez Faculty of Education in Egypt. Thirty prospective EFL teachers participated in each of the experimental and control groups. Both groups were pre-tested using the Writing Performance Test (WPT) for equivalence in their writing performance. The experiment was conducted at the beginning of the first term of the academic year 2009-2010. The experimental and control groups were post-tested using the WPT. Differences between the mean scores of the pre- and post-WPT were calculated using the t-test. The results showed statistically significant differences between the mean scores of the experimental and control groups on the post-WPT in favor of the experimental group. The results also revealed statistically significant differences in the mean scores of the experimental group between the pre- and post-WPT in favor of the post-WPT. These results revealed the effectiveness of using wikis in developing prospective EFL teachers' writing performance. It is recommended that formal training of EFL writing instructors should introduce programs based on using wikis in writing classrooms to develop their students' writing performance. Suggestions for further research include investigating the effect of using wikis on developing prospective EFL teachers' collaboration and reflection. Keywords: wikis, writing performance.
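The pre/post comparison in this abstract rests on the two-sample t-test. As an illustrative sketch (the scores below are made up, not the study's data), the pooled-variance statistic can be computed with the standard library:

```python
import statistics

def independent_t(sample_a, sample_b):
    """Pooled-variance two-sample t statistic, as used to compare
    mean post-test scores of experimental vs. control groups."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5
    df = na + nb - 2
    return t, df

# Hypothetical post-WPT scores (not the study's actual data)
experimental = [78, 85, 80, 90, 74, 88]
control = [65, 70, 68, 72, 66, 71]
t, df = independent_t(experimental, control)
```

The resulting t value would then be compared against the critical value for `df` degrees of freedom at the chosen significance level.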
Texture Recognition Based on DCT and Curvelet Transform
Salah Sleibi Al-Rawi, Ahmed Tariq Sadiq, Ismail Taha Ahmed
Abstract: This paper presents a proposed technique for texture recognition based on the combination of the Discrete Cosine Transform (DCT) with the Fast Discrete Curvelet Transform (FDCvT) via wrapping. The proposed technique includes two stages. The first stage takes individual natural textures (wood, stone and grass) in several positions and calculates the feature vector (mean and standard deviation) using three methods: DCT, FDCvT via wrapping, and FDCvT via wrapping combined with DCT. The second stage takes several samples of new textures to test the work. The results show that the texture recognition rate of the DCT is 52% and of the FDCvT via wrapping is 88%, while the combined technique (FDCvT via wrapping and DCT) achieves a better recognition rate (92%). This combination leads to efficient texture recognition because the DCT adds qualities that strengthen the work of the Curvelet Transform. Keywords: Texture Recognition, DCT, Curvelet Transform.
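The DCT half of the feature vector above is straightforward to sketch. The following is an illustrative, deliberately naive 2-D DCT-II with (mean, standard deviation) features; the 4x4 patch is a hypothetical stand-in for a texture sample, and the paper's actual pipeline (and its curvelet stage) is not reproduced here:

```python
import math
import statistics

def dct2d(block):
    """Naive 2-D DCT-II of a small square block (illustrative, O(N^4))."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                          * math.cos(math.pi * (2 * y + 1) * v / (2 * n)))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

def dct_features(block):
    """Feature vector (mean, standard deviation) of the DCT coefficients,
    the two statistics the paper computes per texture sample."""
    coeffs = [c for row in dct2d(block) for c in row]
    return statistics.mean(coeffs), statistics.pstdev(coeffs)

# Hypothetical 4x4 grey-level patch standing in for a texture sample
patch = [[52, 55, 61, 66],
         [63, 59, 55, 90],
         [62, 59, 68, 113],
         [63, 58, 71, 122]]
mu, sigma = dct_features(patch)
```

In practice a fast DCT (e.g. a library routine) would replace the quadruple loop, and the same (mean, std) statistics would be taken over the curvelet coefficients as well.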
A Survey & Qualitative Analysis of Interactive Pedagogical Drama Systems
Samiullah Paracha, Sania Jehanzeb, and Osamu Yoshie
Abstract: Recent years have seen a growing interest in constructing rich interactive learning and training experiences. As these experiences have grown in complexity, there has been a corresponding growing need for the development of robust technologies to shape and modify them in reaction to learners’ interventions. The most common way of achieving this functionality is by adding a centralized experience manager to monitor and moderate story tension. An experience manager is an intelligent computer agent that manipulates a virtual world to coerce a learner’s experience to conform to a set of provided properties. It directs the roles or responses of objects and agents towards a specific educational or training goal. In this paper, we provide a survey of recent advances in experience management technologies for educational and training purposes, and describe a set of desiderata for the qualitative analysis of such systems. This will serve a dual purpose: to provide a reference for researchers in interactive pedagogical drama community to understand useful points of departure for extending the state of the art; and to enable domain experts, rather than technical experts, to efficiently author complex and engaging scenarios in virtual learning environments.
Interactions in Control of Large Scale Systems
Younes Alfitorey Mousa
Abstract: This paper is aimed at research on large scale systems treated as dynamic multi-parametric systems. The work deals with the theoretical grounds of large systems and the derivation of basic mathematical formulas. This lays the basis for simulating such systems and designing their controllers. The principles of large scale systems control are implemented and tested on an electricity network system. The work deals with creating a simulation model of an electricity network with 15 interacting control areas. The system and the designed controllers are simulated in the MATLAB Simulink environment. The work explores the influence of interactions among the controllers in all control areas, and of different basic controllers, on the effectiveness of the electricity network system as a whole.
A Mobile Application for Monitoring Inefficient and Unsafe Driving Behaviour
Adnan K. Shaout and Adam E. Bodenmiller
Abstract: Many automobile drivers are aware of the driving behaviours and habits that can lead to inefficient and unsafe driving. However, it is often the case that these same drivers unknowingly exhibit these inefficient and unsafe driving behaviours in their everyday driving activity. This paper proposes a practical and economical way to capture, measure, and alert drivers to inefficient and unsafe driving. The proposed solution consists of a mobile application, running on a modern smartphone device, paired with a compatible OBD-II (On-board diagnostics II) reader.
Complex System Model based on Multi-Agent Systems and Petri Nets
Karima Belmabrouk, Fatima Bendella, and Samira Benkhedda
Abstract: This paper discusses the integration of the multi-agent paradigm and of Petri nets for the high-level modeling and design of a compact disc production system (PS). We first describe the bases of a design methodology using Petri nets for these systems, which are by nature complex and distributed. For this, we introduce a simplified example of a production process. We then define a model for transforming the corresponding Petri net (PN) into a multi-agent system (MAS), passing through an intermediate phase that allows the source code of the obtained MAS to be generated automatically. The objective of our approach is to take advantage of the decomposition power of multi-agent systems and their ability to represent complex systems on the one hand, and of the ease of modular representation of complex systems offered by Petri nets on the other. This provides a clear and realistic conceptual model of the different agents, together with an orderly, formal analysis of the global model obtained.
E-learning systems and the requirements of the educational environment
Rajaa Al Hejaili, Abdelrahim Al Aoufi, Abdullah Al Saidi, and Yasser Al Ginahi
Abstract: The use of e-learning management systems is considered a modern technology in teaching and learning, which helps students gain access to educational resources anywhere and anytime with ease and comfort. E-learning systems have a significant impact in creating a learning environment suitable for the learner. Many e-learning systems are available, some open source and others closed source, and many were developed to suit the learner's educational environment. In this research, tools and features are developed and plugged into an open-source e-Learning Management System (LMS) in order to help learners and provide them with a proper environment, which will contribute to improvements in the educational process. Dokeos LMS was selected for adding these tools and features, which may not be found in most LMSs. These tools are suitable for students in Arab societies or any other society: a translation tool, the Islamic calendar, a student attendance sheet, an SMS tool, and a new test type, i.e. a sound test. Adding these proposed tools to any e-learning management system will help increase the level of communication in the learning environment as well as raise the academic level of university students. Keywords: E-Learning, Learning Environment, Learning management systems.
Using Particle Swarm Optimization and Locally-Tuned General
Regression Neural Networks with Optimal Completion for Clustering Incomplete Data Using Finite Mixture Models
Ahmed R. Abas
Abstract: In this paper, a new algorithm is presented for unsupervised learning of Finite Mixture Models (FMMs) from incomplete data sets. The algorithm applies Particle Swarm Optimization to solve the local optima problem of the Expectation-Maximization algorithm. In addition, it uses Locally-Tuned General Regression neural networks with an Optimal Completion Strategy to estimate missing values in the input data set. A comparison study shows the superiority of the proposed algorithm over other algorithms in the literature for unsupervised learning of FMM parameters, producing minimum misclassification errors when used to cluster incomplete data. Keywords: Particle Swarm Optimization, Optimal Completion Strategy, Locally-Tuned General Regression Neural Networks, Finite Mixture Models, Unsupervised Learning, Incomplete data.
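The Particle Swarm Optimization step mentioned above searches globally where EM would stall in a local optimum. As a generic sketch (global-best topology with inertia weight; the sphere function below is a hypothetical stand-in for the FMM log-likelihood, and all parameter values are illustrative, not the paper's):

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0):
    """Minimal global-best PSO with inertia weight 0.7 and
    cognitive/social coefficients 1.5 (illustrative settings)."""
    random.seed(0)  # deterministic for the example
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:        # update personal best
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < f(g):            # update global best
                    g = pos[i][:]
    return g, f(g)

# Sphere function as a hypothetical stand-in for the negative log-likelihood
best, val = pso_minimize(lambda x: sum(t * t for t in x), dim=2)
```

In the paper's setting the objective would instead be the FMM likelihood of the (completed) data, and each particle would encode a candidate set of mixture parameters.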
Geographic Information Systems (GIS): An Empirical Study on the Water Basin in Northern Iraq
Ali Abed Abbas al-Azawy
Abstract: Recent years have been characterized by the development of information technology and prepackaged software that contribute to the automatic extraction of information. The most important of these techniques is geographic information systems (GIS), with their high potential to extract topographical and morphometric information. Processing and analysis of the digital elevation model (DEM), which includes an integrated database in the form of (X, Y, Z) coordinates, make it possible to determine the valley basin and the drainage network, and to define the morphometric characteristics of the valleys efficiently and quickly compared to traditional manual mapping. In the present research, the digital elevation model was used as data input to geographic information systems (GIS) to extract features related to the topographical and morphometric characteristics of the valleys in northern Iraq using the software WMS 7.1, ArcView 3.3 and Global Mapper. The research aims to employ modern technology and digital data to extract information on topographic and morphometric properties using geographic information systems. As materials and research methods, HGT radar data over northern Iraq captured by the U.S. space shuttle from NASA were adopted as the major data source; the information derived from the outputs of the software used emerges through the results of the paper, which include topographical and morphometric features. The research established the importance of radar data and geographic information systems for extracting information into digital maps.
Study the Effect of the Geometrical Correction of the Satellite image on the GPS Tracking by Using GIS
Sabah Hussein Ali
Abstract: Remotely sensed images provide an overview of the features on the earth's surface and help in understanding relationships among these features. The raw image data acquired by remote sensing systems are geometrically distorted, mainly due to errors related to the satellite's positioning on its orbit or to the scanning process. The use of the Global Positioning System (GPS) for geometrical correction (rectification) of satellite imagery aims to establish the relation between the image coordinate system and the GPS readout coordinate system. By using this technique, the errors existing within the satellite image can be calibrated and reduced. This paper introduces the application of a Geographical Information System (GIS) and image processing software, in addition to GPS, for measuring the coordinates of waypoints to be used as ground control points (GCPs) in the geometrical correction of the QuickBird satellite image of the adopted study area (Mosul City). For comparison, geodetic rectification of the QuickBird satellite image was also performed with respect to IKONOS imagery. Due to low standards in the geometric design characteristics of the roads, which badly affected the GPS measurements, the output results show that geodetic rectification of the QuickBird imagery with respect to the IKONOS satellite image gives more accurate results than GCPs acquired by GPS. The overall procedure applied in the present study shows the ability to improve the positional accuracy of the already georeferenced coordinate system of the QuickBird image, which in turn gives higher accuracy of the GPS tracking path for the purposes of mapping, urban planning, cartography, survey and other GIS applications. Keywords: Geometrical correction, GPS, Tracking path, GIS, QuickBird
Pinus trees distribution mapping in Zawita gully with digital analysis
Abstract: Pine forest identification and discrimination for mapping purposes in Zawita gully is needed. Special care was applied in this study to get the best results, taking into consideration two reasons: first, this type of plant cover has decreased noticeably in recent decades; second, there is a shortage of this kind of integrated remote sensing and GIS study in this native area of Pinus brutia in Zawita gully and its mountains in northern Iraq. Pine locations, areas and distribution over the topography, together with a site topography analysis, were determined, and the final results were represented as a thematic map. Accuracy assessment for the digital supervised classification was applied to a Landsat (TM) satellite scene acquired in 2000 that covers the study area. An error matrix report for the five classified classes was obtained. The mapping accuracy for each individual class and the overall mapping accuracy showed the percentage of confusion among classes. The mapping accuracy for pine was 94% and the overall mapping accuracy was 97%. The final result of the site topography analysis and the pine thematic map will be helpful tools for forest administrators in making decisions on conserving and regenerating pine forest and in better understanding the relations that connect pine, climate and topography, which work as a package of site indicators in the study area. Keywords: digital processing, Pinus, Zawita, pinus mapping, pinus in remote sensing
Using GIS Technique in Hydrological Study Of Wadi Al-Shoar
Basin North Iraq
Taha H. Al-Salim and Mohammed F. O. Khattab
Abstract: The well-developed functions of Geographical Information System (GIS) software can be used as a successful tool in hydrological studies. Two GIS packages, Global Mapper and ArcView 3.3, together with the image processing (RS) software ERDAS Imagine 4.2, have been used to study the hydrology of the Wadi Al-Shour watershed.
In order to obtain true watershed measurements of the study area, the authors combined multiple layers comprising the Digital Elevation Model (DEM), a false color satellite image, the B7 satellite image and the topographical map of the study area, together with other layers produced by the software.
The results reveal the effective integration of the GIS programs used in the study of the watershed basin and, moreover, show an increase in the accuracy of the measurements of the morphometric characteristics of the studied basin. An unsupervised classification of the study area was determined, representing dense vegetation, light vegetation, surface water, and outcropping rocks with barren area. The volume of surface runoff was also calculated by applying the Soil Conservation Service curve number method (SCS-CN) to the classified land cover, and was found to equal 285.5 mm.
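The SCS-CN runoff computation referred to above follows the standard curve-number relations: retention S = 25400/CN - 254 (in mm), initial abstraction Ia = 0.2 S, and runoff Q = (P - Ia)^2 / (P - Ia + S) once rainfall P exceeds Ia. A minimal sketch (the rainfall depth and curve number below are hypothetical, not the study's inputs):

```python
def scs_cn_runoff(p_mm, cn):
    """SCS curve-number direct runoff (mm) for rainfall depth p_mm.
    S is the potential maximum retention; runoff is zero until
    rainfall exceeds the initial abstraction Ia = 0.2 * S."""
    s = 25400.0 / cn - 254.0  # retention, mm (metric form)
    ia = 0.2 * s              # initial abstraction, mm
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Hypothetical storm: 100 mm of rain over land with curve number 80
q = scs_cn_runoff(100.0, 80)
```

In the study, the curve number would be assigned per land-cover class from the unsupervised classification and the runoff volumes aggregated over the basin.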
Parallel Hybrid Meta-Heuristic: Genetic Algorithm With
Hill Climbing To Resolve The QAP
Mohammed-Khireddine Kholladi and Ryma Guefrouchi
Abstract: In several practical application domains, the combinatorial optimization problem (COP) is very important, owing to the difficulty of optimization problems and the many practical applications that can take the form of a COP. Currently the major challenge is to solve large generic COPs, for which the use of meta-heuristics is recommended. Single-solution approaches, based on intensification, quickly give a good solution but may be trapped in a local optimum. This is avoided by the diversification provided by population-based approaches, which in turn have the disadvantage that the solution set found is only an approximation of the optimum. Hybridizing the two approaches can better guide the search toward the optimal solution, while parallelism provides computing power and can accelerate the optimization process. To combine the gains of this hybridization with those of parallelism, this paper presents an application of such a parallel hybrid to the quadratic assignment problem (QAP).
An Analyzing Study of the Distributed Database System Parameters
Khaled Saleh Maabreh
Abstract: A distributed database system consists of a number of sites connected via a computer network, holding a huge amount of data used by an ever-growing, unpredictable number of users. Because of the increasing requirement for information to be available, many parameters may affect the performance of distributed database systems, including the number of sites, the degree of replication and the operation modes. This study aims to analyze the effects of these parameters on the performance of the distributed database system and to identify which parameter has the greatest effect; it may help in evaluating the best configuration and operating environment to enhance performance and throughput and decrease delay time.
Mathematical Explanation of the Errors That Occur In Real-Time Systems
Hamid Saghir Saad Al-Raimi and Jamil Abdulhamid Moh'd Saif
Abstract: Real-time systems depend on time as an important and essential parameter. Such systems are designed to work in very critical environments that require quick responses to external events within specific time limits, without any delay. The inability of these systems to respond immediately to occurrences is unacceptable and leads to many errors; such errors cause material and human losses in these critical environments, so exceeding the temporal limits when responding to occurrences, especially critical ones, is unacceptable in these systems. This research focuses on studying the errors that occur in real-time systems as a result of delays in responding to events and processes and in handling them within their specified time. A mathematical model was designed to explain the working mechanism of these systems, which allows us to find these errors and the causes of their occurrence, and thus to use methods that decrease them. If these errors go beyond the limits allowed by the system, the system becomes unreliable in executing processes within their specified time and unqualified to work in real time. Since these systems control very important facilities such as power generation stations and nuclear reactors, immediate responses to external events are a very important matter. Keywords: Real-Time System, Characteristic Function, errors in real-time systems, Inverse Fourier Transform.
Features Extraction Techniques of EEG Signal for BCI Applications
Abdul-Bary Raouf Suleiman and Toka Abdul-Hameed Fatehi
Abstract: The use of Electroencephalogram (EEG) signals in the field of Brain Computer Interfaces (BCI) has attracted a lot of interest, with diverse applications ranging from medicine to entertainment. In this paper, a BCI is designed using EEG signals where the subjects have to think of only a single mental task. EEG signals are recorded from 16 channels and studied during several mental and motor tasks. Features are extracted from these signals using several methods: time analysis, frequency analysis, time-frequency analysis and time-frequency-space analysis. The extracted EEG features are classified using an artificial neural network trained with the back-propagation algorithm. Classification rates reaching 99% between two tasks and 96% between three tasks were obtained using space-time-frequency analysis and time-frequency analysis.
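One common frequency-analysis feature of the kind described above is band power. As an illustrative sketch only (a naive DFT on a synthetic signal; not the paper's pipeline, and the sampling rate and bands are assumptions), the alpha-band power of a 10 Hz test tone can be computed as:

```python
import math

def band_power(signal, fs, lo, hi):
    """Power of a signal in the [lo, hi] Hz band via a naive DFT --
    one simple frequency-domain EEG feature."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

# Synthetic check: a pure 10 Hz sine sampled at 128 Hz should have its
# energy concentrated in the alpha band (8-13 Hz), not the delta band
fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
alpha = band_power(sig, fs, 8, 13)
delta = band_power(sig, fs, 0.5, 4)
```

In a real BCI the per-band powers from each of the 16 channels would form (part of) the feature vector fed to the neural network classifier.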
Enhancement of E-Government Security Based on Quantum Cryptography
Sufyan T. Faraj Al-Janabi and Ali M. Sagheer
Abstract: An increasing set of e-government services has already been set up, and more are still emerging. These new services on the Internet mean new security challenges for their operators, and they are supported by executive orders that draw up requirements for the security level of e-government services. SSL/TLS is the protocol used for the vast majority of secure transactions over the Internet in e-government systems. However, this protocol needs to be extended in order to create a promising platform for the integration of quantum cryptography (QC) into the Internet infrastructure, so that unconditionally secure services can be offered by e-government applications. To achieve this objective, this paper introduces the integration of QC into the e-government security architecture, based on a novel extension of SSL/TLS that significantly facilitates such integration. This extended version of SSL/TLS is called QSSL (Quantum SSL). During the development of QSSL, the focus was on creating a simple, efficient, general, and flexible architecture that enables the deployment of practical quantum cryptographic security applications. QSSL also efficiently supports unconditionally secure encryption (one-time pad) and/or unconditionally secure authentication (based on universal hashing). Besides enabling e-government systems to offer unconditionally secure services, QC has the ability to enhance traditional computationally secure e-government services. Keywords: E-Government, Quantum Cryptography, SSL/TLS, Key distribution, One-time pad, Unconditional security.
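The one-time pad mentioned above is simply a byte-wise XOR with a key as long as the message. A minimal sketch (the message is hypothetical, and `os.urandom` stands in for the quantum-key-distribution material QSSL would actually use):

```python
import os

def otp(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each byte with a key byte. Encryption and
    decryption are the same operation; security is unconditional only
    if the key is truly random, as long as the message, and never reused."""
    if len(key) < len(data):
        raise ValueError("one-time pad key must cover the whole message")
    return bytes(d ^ k for d, k in zip(data, key))

# Hypothetical record; in QSSL the key bits would come from QKD, not os.urandom
msg = b"e-government record"
key = os.urandom(len(msg))
ct = otp(msg, key)           # encrypt
recovered = otp(ct, key)     # decrypting is the same XOR
```

The key-management burden (never reusing pad material) is exactly why a continuous key-generation channel such as QKD is attractive here.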
Is Saudi Arabia Ready For E-Learning? – A Case Study
Farah Habib Chanchary and Samiul Islam
Abstract: The steady growth of e-learning around the world is inspiring many educational and business institutions to adopt it. In order to benefit from e-learning, educational institutes should first conduct an investigation to assess learners' readiness. This paper analyses a small-scale readiness evaluation case study for three groups of learners at a Saudi Arabian university. Statistical analysis and data mining tools have been used to find correlations among the technical ability, learning ability, time management ability and preferred mode of study of these learners. Our investigation shows that the majority (73%) of the students still prefer classroom teaching to individual study. Keywords: e-learning, e-learning readiness, readiness evaluation measurement
Mobile Learning in Saudi Arabia - Prospects and Challenges
Farah Habib Chanchary and Samiul Islam
Abstract: The continuing expansion of broadband wireless networks and the explosion of power and capacity of the next generation of cellular telephones make it evident that mobile telephones, a familiar communication tool, have immense possibilities for teaching, learning, and research in workplaces as well as in educational institutions. This paper reviews the prospects and technological challenges of mobile learning in Saudi Arabia (SA). An analysis of questionnaire survey findings is presented to measure students' attitudes and perceptions of the effectiveness of mobile learning. A total of 131 undergraduate students from a Saudi Arabian university participated in this study. More than 75% of the students show positive attitudes towards m-learning due to the flexibility of learning methods and timings, and improved communications among learners.
Relevance of Remote Sensing and Geographic
Information in Assessing Sustainable Groundwater
Resources - A Case of Kedah and Perlis States, Malaysia
K.A.N. Adiat, M.N.M. Nawawi, and K. Abdullah
Abstract: Groundwater abstraction for irrigation had been proposed in the study area. The effects of three hydrogeological indices, namely lineament density, drainage density and lithology, on the occurrence of groundwater were examined and evaluated. The Multi-criteria Decision Analysis technique was used to assign weight to each index in the context of the Analytic Hierarchy Process. This was based on the degree of influence of each of the indices in controlling the groundwater storage potential in the area. The assigned weight was normalized and the consistency ratio was established to be within the acceptable range of values of less than 0.1. The normalized weight and the probability rating of each index were integrated to produce the groundwater prediction map for the area of study. The area was zoned into five classes of groundwater potential. It was however observed that the groundwater potential of parts of the area where the groundwater abstraction for irrigation had been proposed was fairly limited. Borehole records obtained from the Malaysian Department of Mining and Geosciences and those that were drilled during this study were used to validate the prediction map. From the yield results obtained from the boreholes, the accuracy of the groundwater potential prediction map produced was estimated to be 75%. The study has established that the methods adopted for this study yielded good results and provided adequate information on the groundwater resources occurrence in the area.
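The AHP weighting and consistency check described above can be sketched compactly. The following is an illustration only: the pairwise comparison values for the three indices (lineament density, drainage density, lithology) are hypothetical, and the weights use the common row-geometric-mean approximation rather than the exact principal eigenvector:

```python
import math

def ahp_weights_and_cr(matrix):
    """Approximate AHP priority weights (row geometric mean) and
    consistency ratio CR = CI / RI for a pairwise comparison matrix."""
    n = len(matrix)
    gm = [math.prod(row) ** (1 / n) for row in matrix]
    total = sum(gm)
    w = [g / total for g in gm]                      # normalised weights
    # lambda_max estimated from (A w) / w, averaged over rows
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)                         # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]              # Saaty's random index
    return w, ci / ri

# Hypothetical comparison of lineament density, drainage density, lithology
m = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights, cr = ahp_weights_and_cr(m)
```

A consistency ratio below 0.1, as reported in the abstract, indicates that the pairwise judgements are acceptably coherent before the weights are applied to the index maps.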
Tele-monitoring the navigation of a Set of Robots
Using Multiprocessors with a New Voice Recognition Module
Mohamed Fezari, Mohamed Larbi Saidi, Hamza Atoui, and Sadek Lemboub
Abstract: This work describes a project based on the implementation of a new processor for tele-operating the navigation of a colony of robots. The adopted design groups a DSP processor for speech enhancement with a new speaker-dependent isolated-word voice recognition module and a microcontroller. The resulting design is used to control, via a master-slave protocol, a set of small mobile robots from POB-Technology through vocal commands. Experiments have shown that using kits is the best way to save design time. Moreover, the DSP processor is integrated in order to enhance the quality of the speech signal by reducing noise and echoes. The input of the system is a sentence of spotted Arabic words used to control the objects and movements of a set of mobile robots. The output is a corresponding command byte sent via a Bluetooth module to the microcontroller server that commands the mobile robot's direction. Since the system is an embedded device developed to be portable, it should be easy to carry and use, with low power consumption, hence the choice of low-power processors.
Shapes Matching and Indexing using Textual Descriptors
Saliha Aouat and Slimane Larabi
Abstract: We propose in this paper a new method for matching and indexing shapes. Models of object silhouettes are stored in the database using their textual descriptors. As we will see, XLWDOS descriptors are sensitive to noise. We propose a "reduction technique" to process noisy shapes and match the corresponding XLWDOS descriptors using only "textual transformations". The matching algorithm we propose is an efficient way to index shape descriptors. Experiments on real images are conducted and explained. Keywords: XLWDOS, noise, reduction technique, matching, textual transformation.
A Graph Grammar Approach for the Calculation of Aggregate Regions
Hiba Hachichi, Ilham Kitouni and Djamel-Eddine Saidouni
Keywords: Formal Verification, Graph transformation, DATA*, regions automata, aggregate regions automata, AToM3.
An Overview and Evaluation of Indices Based on Arabic Documents
Abstract: This research compares the inverted file, signature file, suffix array and suffix tree, based on Arabic documents, to evaluate their performance in terms of efficiency and effectiveness. The time needed to retrieve documents and the memory needed to create the required indices (space) are two factors that affect this performance. Performance was measured using precision and recall, after building and comparing the four techniques on a collection of 242 Arabic abstracts and a collection of 60 Arabic queries. After running the system, the inverted file showed advantages over the other techniques, while the suffix array showed advantages over the remaining two techniques, which had nearly the same results. Keywords: Information retrieval, Arabic document, Indices, Recall, Precision
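The precision and recall measures used above are ratios over the retrieved and relevant document sets. A minimal sketch (the document IDs are hypothetical, not drawn from the study's collection):

```python
def precision_recall(retrieved, relevant):
    """Precision = |retrieved & relevant| / |retrieved|,
    recall    = |retrieved & relevant| / |relevant|."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

# Hypothetical result list for one of the 60 queries
p, r = precision_recall(retrieved=["d1", "d2", "d3", "d4"],
                        relevant=["d1", "d3", "d5"])
```

In the evaluation these values would be averaged over all 60 queries for each of the four index structures.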
A General Framework to Bridge the Gap Between
Conceptual System and Abstract System in Software
Mohamed Ali ElShaari, Mohamed Ali Hagal, and Zainab Saad Elbadry
Abstract: The goal of software development is to construct systems that can be implemented on computers, but this faces many obstacles; the most important is that analysis and design artifacts are hard, and sometimes impossible, to implement on a computer. This paper attempts to define an analytical framework that synchronizes the work between the computer and this problem. It uses general systems theory and formal methods to build a formal conceptual understanding of the target problem, and then to build a software architecture interpreted in a formal language that expresses the components of the software architecture. The work is designed to show that the problem may be solved if the framework is followed: a computable system is reached by using a formal language, and this computable system is supposed to have the ability to be implemented on a computer. A relevant example is a case study about an electronic market, which clarifies how to follow the framework to solve a software development problem.
Keywords: Abstract level, Formal, Architecture, OWL, Problem, Ontology, UML.
Two Multi-Class Approaches For Reduced Massive Data Sets Using Core Sets
Lachachi Nour-Eddine and Adla Abdelkader
Abstract: Current conventional approaches to biometric development for Text-Independent Speaker Identification and Verification systems present serious challenges in computational complexity and time variability. In this paper, we develop two approaches using SVMs which can be reduced to Minimal Enclosing Ball (MEB) problems in a feature space, to produce simple data structures optimally matched to the input demands of different system backgrounds, such as UBM architectures in speaker recognition and identification systems. For this, we explore a technique to learn Support Vector Models (SVMs) when the training data is partitioned among several data sources. Computation of such SVMs can be efficiently achieved by finding a core set for the image of the data in the feature space cited above.
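The core-set idea the paper builds on can be sketched with the classic Badoiu–Clarkson iteration for the Minimal Enclosing Ball. This is a generic 2D illustration on invented points, not the authors' kernel-space SVM formulation:

```python
import math

def minimal_enclosing_ball(points, iterations=100):
    """Badoiu-Clarkson style (1+eps)-approximation of the MEB.
    The farthest points touched along the way form a core set:
    the ball of the core set approximates the ball of all points."""
    c = list(points[0])
    core_set = set()
    for t in range(1, iterations + 1):
        # farthest point from the current centre
        p = max(points, key=lambda q: (q[0]-c[0])**2 + (q[1]-c[1])**2)
        core_set.add(p)
        # move the centre a 1/(t+1) step towards it
        c[0] += (p[0] - c[0]) / (t + 1)
        c[1] += (p[1] - c[1]) / (t + 1)
    radius = max(math.dist(c, q) for q in points)
    return tuple(c), radius, core_set

# invented points: the optimal ball has centre (1, 0), radius 1
pts = [(0.0, 0.0), (2.0, 0.0), (1.0, 0.2), (1.0, -0.1)]
centre, radius, core = minimal_enclosing_ball(pts)
```

The appeal for distributed SVM training is that the core set is tiny and independent of the total number of points, so each data source only contributes a few points to the shared model.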
Model Performance Improvement with Least Square Method in Highly Imbalanced and Correlated Datasets
Abstract: The need for accurate, complete and quality data is still a problem in every domain for model development. However, with the volume of stored information growing every day and the necessity of its integration for data analysis and data mining, the need for domain-specific applicable methods to overcome the problem is obvious. In our experimental work in this paper, the idea is to use the Least Square method (LS) to generate artificial data and to show its effect on model improvement. The method is applicable to highly correlated numerical features in imbalanced datasets in any domain.
Keywords: Least Square Method, Highly Correlated, Model Improvement, Artificial Data.
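A minimal sketch of the idea, assuming a simple deterministic variant: fit a least-squares line to a highly correlated feature pair of the minority class, then place synthetic points on the fitted line. The paper's exact generation procedure may differ, and all numbers below are invented:

```python
def least_squares_fit(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def generate_artificial(xs, ys, n_new):
    """Place n_new synthetic points on the fitted line, spread evenly
    across the observed x-range (a deterministic illustrative variant)."""
    a, b = least_squares_fit(xs, ys)
    lo, hi = min(xs), max(xs)
    step = (hi - lo) / (n_new + 1)
    return [(lo + step * (i + 1), a * (lo + step * (i + 1)) + b)
            for i in range(n_new)]

# invented minority-class feature pair, highly correlated:
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.0, 8.1]
new_points = generate_artificial(xs, ys, n_new=3)
```

Because the synthetic points respect the linear relationship between the correlated features, they enlarge the minority class without distorting the structure the model has to learn.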
Auditing For Standards Compliance in the Cloud: Challenges and Directions
Abstract: Cloud computing has recently grown into prominence as one of the most attractive computing paradigms used by businesses to cut costs while simultaneously gaining the ability to dynamically allocate the technology resources that meet their needs. Despite its attractiveness, however, many organizations remain reserved due to concerns about the security and auditability of cloud environments. In this paper, we evaluate some of the unique security challenges created by cloud computing and how these challenges impact auditing towards standards compliance. We examine the notion of audit as it is currently being used, by surveying available provider APIs and new standards for publishing audit data. Our research has concluded that, while there are some promising efforts underway, current efforts by cloud providers termed as audit still fall short of addressing some of the most pressing concerns of their customers.
Keywords: Cloud Computing, Audit, IT Standards Compliance.
Obstacle Detection with Stereo Vision based on the Homography
Nadia Baha and Slimane Larabi
Abstract: In this paper, we propose a simple method of obstacle detection that enables a mobile robot to locate obstacles in an indoor environment using a pair of images from uncalibrated cameras. Using a set of feature points that have been matched between the two views with ZNCC correlation, a robust estimate of the homography of the ground is computed. Knowing this homography permits us to compensate for the motion of the ground and to detect obstacles as areas in the image that do not appear stationary after motion compensation. The resulting method does not require camera calibration, does not compute a dense disparity map, and avoids the 3D reconstruction problem. The approach allows us to detect several obstacles of varied shapes and sizes. This obstacle detection stage can be viewed as the first stage of a free-space estimator that can be implemented in an autonomous mobile robot.
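The core test — does a matched point obey the ground homography? — can be sketched as follows. The homography and the point matches are invented, and a real system would estimate H robustly (e.g. with RANSAC) from the ZNCC matches rather than assume it:

```python
def apply_homography(H, pt):
    """Map an image point through a 3x3 homography (projective transform)."""
    x, y = pt
    w = H[2][0]*x + H[2][1]*y + H[2][2]
    return ((H[0][0]*x + H[0][1]*y + H[0][2]) / w,
            (H[1][0]*x + H[1][1]*y + H[1][2]) / w)

def flag_obstacles(H_ground, matches, tol=2.0):
    """Points whose second-view position disagrees with the ground-plane
    homography by more than tol pixels cannot lie on the ground, so they
    are obstacle candidates."""
    flagged = []
    for p1, p2 in matches:
        px, py = apply_homography(H_ground, p1)
        if ((px - p2[0]) ** 2 + (py - p2[1]) ** 2) ** 0.5 > tol:
            flagged.append((p1, p2))
    return flagged

# invented ground homography: the ground plane shifts 5 px to the right
H = [[1.0, 0.0, 5.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
ground_pt = ((10.0, 10.0), (15.0, 10.0))    # consistent with H
obstacle_pt = ((10.0, 20.0), (25.0, 20.0))  # 10 px off the prediction
candidates = flag_obstacles(H, [ground_pt, obstacle_pt])
```

Only the point that violates the ground-plane motion is flagged, which is the principle behind detecting "non-stationary" areas after motion compensation.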
Categorization of a Song Using an Ant Colony Algorithm
Nadia Lachtar and Halima Bahi
Abstract: Every day, the available mass of information other than text (such as images, video and sound) increases. These audio files, which may contain speech, music or other sounds, are diverse and increasingly common across the major media: radio, television and the Internet. This information would be irrelevant if our ability to access it efficiently did not increase as well. For maximum benefit, we need tools that allow us to search, sort, store, index, update and analyze the available data, and tools that help us find the desired information in a reasonable time. One of the promising areas is audio database indexing. This paper addresses the problem of indexing a database containing songs to enable its effective exploitation. Since we are interested in song databases, it is necessary to exploit the specific structure of the song, in which each part plays a specific role. We propose to use the title and the artist's particularities (in fact, each artist tends to compose or sing a specific genre of music). In this article, we present our experiments in automated song categorization, where we suggest the use of an ant colony algorithm. A naive Bayes algorithm is used as a baseline in our tests.
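The naive Bayes baseline mentioned in the abstract can be sketched as a multinomial classifier over song metadata such as title and artist words; the miniature training set below is invented for illustration:

```python
from collections import Counter
import math

class NaiveBayesText:
    """Multinomial naive Bayes with add-one (Laplace) smoothing."""
    def fit(self, docs, labels):
        self.classes = set(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        self.vocab = set()
        for text, label in zip(docs, labels):
            words = text.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        words = text.lower().split()
        n_docs = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for c in self.classes:
            total = sum(self.word_counts[c].values())
            lp = math.log(self.class_counts[c] / n_docs)  # class prior
            for w in words:
                # smoothed per-word likelihood
                lp += math.log((self.word_counts[c][w] + 1)
                               / (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# invented song metadata and genres:
docs = ["habibi love song", "desert rock guitar", "love ballad habibi", "heavy rock riff"]
labels = ["pop", "rock", "pop", "rock"]
genre = NaiveBayesText().fit(docs, labels).predict("habibi love")
```

The ant colony categorizer proposed in the paper is then compared against this kind of baseline.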
A Structured Approach for Extracting Functional Requirements from Unclear Customers
Mohammed A. Hagal and Omar M. Sallabi
Abstract: Many challenges face developers during the specification of requirements for new systems. Errors in the requirements that are detected in the last stages of the system (such as the implementation stage) are very expensive to correct because they may require rework effort. Such errors sometimes occur when customers do not have the ability to articulate their requirements, or when developers make implementation compromises in order to get a working prototype rapidly, which might cause inappropriate design decisions and inefficient algorithms. Thus, effective management of requirements extraction is essential. In this paper we propose a guideline approach consisting of managed stages that aim to help in extracting precise software requirements.
Keywords: Software engineering, Requirement engineering, UML, Prototyping.
Data Mining Techniques for Medical Data Classification
Emina Aličković and Abdulhamit Subasi
Abstract: Data mining is information extraction from databases. In this paper, we use data mining techniques to obtain a correct medical diagnosis. In this study, different techniques are presented to get better accuracy using data mining tools such as Bayesian Networks, Multilayer Perceptrons, Decision Trees and Support Vector Machines (SVM). By using SVM, we achieved 97.72% accuracy on the WDBC dataset.
Keywords: Decision Tree, Support Vector Machine, Multilayer Perceptron, Bayesian Network, Breast Cancer Diagnosis.
The Ability of Remote Sensing Techniques in the Search for Groundwater
Abd elbagy Mustafa
Abstract: The main objective of this research work is to evaluate the potential of remote sensing techniques to detect groundwater in the study area. To achieve this objective, a satellite image acquired by the American system (Landsat-7) covering the study area was selected. ERDAS IMAGINE, a digital image processing application, was used to enhance, georeference and classify this image according to the land cover of the study area. The classified image was compared with geophysical methods for groundwater detection. The analysis of this comparison shows that remote sensing and digital image processing techniques provide a very powerful tool for groundwater detection.
An Assistive Computerized System for Managing and Educating Children with Moderate and Mild Intellectual Disabilities at Shafallah Center in the State of Qatar
Moutaz Saleh and Jihad Aljaam
Abstract: In spite of the current proliferation of the use of computers in education in the Arab world, complete suites of solutions for students with special needs are very scarce. This paper presents an assistive system managing learning content for children with moderate to mild intellectual disabilities. The system provides educational multimedia content, inspired by the local environment, in different subjects such as Math, Science, Shariaa, daily life skills, and others, to target specific learning goals suitable for this group of learners. The system tracks the individual student's progress against the individualized learning plan assigned by the specialized teacher according to the learner's abilities. Upon completion of learning a particular task, the system tests the learner by asking him to order a set of sub-tasks in the logical sequence necessary to successfully accomplish the main task. The system also facilitates deploying intelligent tutoring algorithms to automatically correct mistakes after a number of trials, working adaptively (hand-in-hand) with the learner to successfully learn how to complete the task.
Keywords: Intellectual Disability, Computerized Learning, Multimedia Contents, Personalised Learning
A Proposed GIS-Based Decision Making Framework for Tourism Development Sites Selection
Mohammed A. Al-Amri and Khalid A. Eldrandaly
Abstract: Building a new tourism facility is a critical decision made by private and public owners. Determining facility locations is critical to the success or failure of such investments. The selection of a tourism development site involves a complex array of decision factors involving economic, social, technical, and environmental issues. In the process of finding the optimum location that meets the desired conditions, the analyst is challenged by the tedious manipulation of spatial data and the management of multiple decision-making criteria. Geographic Information Systems (GIS), Multicriteria Decision Making techniques (MCDM), and Expert Systems (ES) are the most common tools employed to solve siting problems. However, each suffers from serious shortcomings and cannot be used alone to reach an optimum solution. This paper presents a new decision-making framework in which ES, GIS and MCDM techniques are integrated systematically to facilitate decision-making regarding the selection of suitable sites for building tourism facilities.
Keywords: Tourism Site Selection, GIS, ES, MCDM.
Analyzing the Performance of a Dynamic Access Control System with Integrated Risk Analysis
Abstract: Conventional approaches for adapting security enforcement in the face of attacks rely on administrators to make policy changes that will limit damage to the system. Paradigm shifts in the capabilities of attack tools demand supplementary strategies that can also adjust policy enforcement dynamically. In previous studies, we have proposed an approach for integrating real-time security assessment data into access control systems to facilitate dynamic enforcement methodologies. One significant question surrounding systems that process and analyze data in real-time usually concerns the performance of the system. In order to demonstrate the feasibility of the proposed system, here we present a detailed performance analysis contrasting a normal Apache webserver with other Apache webservers which have been augmented with a real-time risk analysis system. Using this data, we are able to draw conclusions regarding the performance constraints of incorporating real-time data in access control policy evaluation and demonstrate the ability of the proposed system to maintain high request throughput.
Keywords: Access Control, Vulnerability Assessment, Risk Analysis
An Optimal General Nonlinear Trend for Fuzzy Time Series Forecasting Based on Interval Fuzzy Rules
Saleh Hussein Awami, Youssef Hamed Shakmak, and Samira Mohamed Boaisha
Abstract: Recently, fuzzy time series forecasting models have been developed using different techniques in order to improve the forecasting accuracy rate; most approaches still present a low forecasting accuracy rate. In this paper, the approach of Meredith Stevenson and John E. Porter is modified by adding an Optimal General Nonlinear Trend (OGNT) of the year-to-year percentage change as the Universe of Discourse (UoD). Our proposed model improves the Kth-order, time-invariant and time-variant models based on the frequency-density-based partitioning presented by Jilani, Burney and Ardil. Nine fuzzy rules are applied on each partitioned interval fuzzy set in order to obtain the 1st-order, 2nd-order and 3rd-order general nonlinear trends according to 7-interval, 13-interval and 17-interval fuzzy sets, applying a triangular membership function. It is found that the 3rd-order general nonlinear trend is the optimal general trend for the universe of discourse. The proposed model is compared with existing forecasting models using the enrollment figures for the University of Alabama and shows a better forecasting accuracy rate than the existing models. The major goal of this work is to present a simple framework for forecasting fuzzy time series with a high forecasting accuracy rate.
Keywords: Optimal General Nonlinear Trend, Forecasting Time Series, Interval Fuzzy Rules, Time-Variant and Time-Invariant Models, High Order Partitioning, Soft Computing
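The interval partitioning with triangular membership functions that this family of models relies on can be sketched as follows. The universe-of-discourse bounds are invented; the paper partitions the year-to-year percentage change:

```python
def triangular_membership(x, a, b, c):
    """Triangular fuzzy membership with peak at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def partition_universe(lo, hi, n_intervals):
    """Split the universe of discourse into n equal intervals and return
    the (a, b, c) triangle for each fuzzy set, with the peak b at the
    interval midpoint and the support overlapping the neighbours."""
    width = (hi - lo) / n_intervals
    sets = []
    for i in range(n_intervals):
        mid = lo + (i + 0.5) * width
        sets.append((mid - width, mid, mid + width))
    return sets

# invented universe of year-to-year percentage change, 7 intervals:
fuzzy_sets = partition_universe(-10.0, 10.0, 7)
mu = triangular_membership(0.0, *fuzzy_sets[3])  # membership of 0% change
```

Each observation is fuzzified via these memberships, and the forecasting rules are then defined over the resulting interval fuzzy sets.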
On Internet Multicast Architectures: Fully Distributed and Hierarchical Vs Service-Centric
Omar Said, Ahmed Ghiduk, Sultan Aljahdali
Abstract: There are two approaches to multicast routing architecture. The first approach is the traditional multicast architecture, which constructs and updates the multicast tree in a distributed manner. The second and most recent approach is called service-centric, in which there are two types of routers: an efficient router, called the m-router, handles many-to-many multicast functions, while the other routers, called i-routers, handle only minimal multicast functions. This approach has drawbacks originating from its centralization. This paper proposes two approaches that enhance the performance of the service-centric architecture: a hierarchical architecture and a fully distributed architecture. In our proposed architectures, the service-centric m-router is divided into three sub m-routers. The functions of each sub-router are determined, how these routers communicate with each other to build the multicast tree is demonstrated, how the multicast tree is managed in the new architectures is shown, and how the new architectures overcome the drawbacks of the current approaches is clarified.
Challenges of Information Era: Teachers’ Attitude towards the Use of Internet Technology
Muhammad Safdar, Irshad Hussain, Muhammad Abdul Malik, Kamran Masood and Muhammad Yaqoob
Abstract: The main purpose of the study was to assess the use of the Internet and the factors affecting the effective use of this technology in teacher training. A sample of 300 teachers of BEd, MEd and MA Education was taken randomly. The researcher used a questionnaire as the research tool for data collection, and the collected data were analyzed with SPSS XIV. The results of the study revealed that teachers have a positive attitude towards the use of Internet technology. They use this technology frequently for preparing lectures, presentations and handouts, giving feedback to students, checking students' assignments, communicating with students, searching for conferences and publishing papers. However, lack of hardware, lack of training, lack of quality software, power failure and lack of technical support were the main barriers to the effective use of this technology.
Keywords: Teachers' Attitude, Internet, Instructional Use, Barriers
Using Inconsistencies to Assess the Degree of Progress in Development Process
Randa Ali Numan Khaldi
Abstract: In software systems some degree of uncertainty or inconsistency is tolerated, even in the final product. In such cases, there is a need to measure, evaluate or even estimate the impact of these inconsistencies on software artifacts, or the frequency of failures in developing and completing projects. Developers often need to know the number and severity of the inconsistencies in their descriptions, how the various changes they make affect these measures, and whether they can measure the effectiveness of the development process. In this paper we define algorithms for measuring: the number of inconsistencies detected in each requirement, in each stage of the development process, and in the whole development process; the degree of risk after handling; and the degree of progress for each cycle, each stage, and the whole development process. These algorithms help the requirements engineer to revise and reanalyze his work at a certain stage, or to decide whether he has to go back and track the action that caused such changes; this slows the development process and increases the time needed to complete it, which in turn increases the cost of fixing the problem.
Keywords: Impact on software artifacts, Handling inconsistency, Degree of risk, Managing inconsistencies, Degree of progress.
3D Mapping of an Unknown Environment by
Cooperatively Mobile Robots
Ashraf S. S. Huwedi
Abstract: Some of the important future applications of mobile robot systems are autonomous indoor and outdoor measurements of buildings, factories and objects in a three-dimensional view. A goal of such measurement is to provide us with a detailed map of the environment with its interesting characteristics. This map can then be transferred into a model which represents the measured objects. By following an automated procedure, an autonomous measurement robot can independently extract the map and the model of the environment. The environmental model can then be used directly by autonomous mobile robots for navigation. A further increase in the effectiveness of map production can be achieved if the environment is mapped by several robots, or whole robot fleets. Such an approach can be an advantage, for example, by reducing the exploration time. This paper develops a framework for 3D mapping using multiple mobile robots. Two mobile robots equipped with different sensor capabilities are used. The main contribution of the paper is to autonomously build a 3D map of indoor environments within a good exploration time by using multiple mobile robots. Employing multiple autonomous robots with different types of sensors, two different algorithms are presented in this paper. The first is an algorithm for natural feature extraction using a stereo camera in order to build a 3D feature-based map. The second is an algorithm that extracts geometrical features from range images in order to build a 3D model of the environment. The algorithms and the framework are demonstrated on an experimental testbed that involves a team of two mobile robots: one works as a master equipped with a stereo camera, whereas the second is involved as a slave equipped with a rotating laser scanner sensor.
Keywords: Multi-Mobile Robots, Cooperation of Robots, 3D Mapping and Exploration, Image Processing
Towards a Security Meta-model for Software Architectures
Mohamed Gasmi, Makhlouf Derdour, and Nacéra Ghoualmi Zine
Abstract: Security is becoming a very important concern for distributed application architectures. Previous modeling approaches provided insufficient support for an in-depth treatment of security. There is currently no generic solution that can automatically deploy security techniques at the creation of the software architecture. The identification of security requirements during the assembly of software components is necessary in such approaches. Indeed, software architectures validate the functional aspects, which are insufficient to ensure a realistic assembly that remedies the problem of security. Facing the new security challenges of distributed software applications, and building on the base provided by existing software architecture research, we propose a model-based approach called the Security Meta-model of Software Architecture (SMSA). Our model is focused on semantically rich software connectors that provide communication and secure the exchange of information between distributed components in the same configuration.
Keywords: Component, Connector, Security, Non-functional requirements, Software architecture.
Pachinko Allocation Model with Image Local
Features for Image Retrieval Tasks
Ahmed Boulemden and Tlili-Guiassa Yamina
Abstract: The Pachinko Allocation Model, Latent Dirichlet Allocation and other topic models are popular tools used in text modeling, and increasingly in the image processing field, especially for object recognition and image retrieval tasks. We present in this paper the experiment on which we are working in the domain of image indexing and retrieval. This experiment consists of using the Pachinko Allocation Model (PAM) with local image features extracted by the Scale Invariant Feature Transform (SIFT) technique in a content-based image retrieval task. The experiment is part of our work, which focuses on the use of the Pachinko Allocation Model with local, global, and fused local/global features of images in the image indexing and retrieval field.
Electronic Shopping Behavior in Mobile Commerce Context: An Empirical Study
Tarek Taha Ahmed
Abstract: Today customers are able to shop via the wireless Internet, using a web browser as an e-shopping channel to access retailers' websites. Although most available studies have theoretically proposed relationships between different variables and e-shopping behavior, comparatively little research has tested this phenomenon empirically in the m-commerce context. Another key limitation of the existing literature is its focus on developed countries, while the worldwide growth of m-commerce has shown the need to extend this research to other, unstudied developing countries with different cultures and from different perspectives. Thus, the current paper is one more attempt to fill these gaps in the current body of literature. This study aimed to propose a model for examining and empirically validating the critical factors that have the most significant influence on customers' behavioral intention to use the wireless Internet as an e-shopping channel. In contrast to previous works, the current empirical study extended the research scope by combining the most critical factors identified in the literature and applying them in the local context; our model therefore contains variables that had not previously been integrated into one framework and examined simultaneously for validation and relationships.
Arabic Opinion Mining Using Combined Classification Approach
Abstract: In this paper, we present a combined approach that automatically extracts opinions from Arabic documents. Most research efforts in the area of opinion mining deal with English texts, with little work on Arabic text. Unlike for English, our experiments showed that using only one method on Arabic opinion documents produces poor performance, so we used a combined approach that consists of three methods. At the beginning, a lexicon-based method is used to classify as many documents as possible. The resulting classified documents are used as a training set for a maximum entropy method, which subsequently classifies some other documents. Finally, a k-nearest-neighbour method uses the documents classified by the lexicon-based and maximum entropy methods as a training set and classifies the rest of the documents. Our experiments showed that, on average, the accuracy moved from (almost) 50% when using only the lexicon-based method, to 60% when using the lexicon-based method and maximum entropy together, to 80% when using the three combined methods.
Keywords: Opinion Mining, Sentiment Classification, Combined Classification, Arabic Opinion Mining.
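The first stage — lexicon-based classification that labels only decisive documents and leaves the rest to the statistical stages — can be sketched like this. The miniature English lexicon is an invented stand-in for a real Arabic opinion lexicon:

```python
# invented miniature lexicon; a real system would use a large Arabic lexicon
POSITIVE = {"excellent", "good", "wonderful"}
NEGATIVE = {"bad", "terrible", "poor"}

def lexicon_classify(text, margin=1):
    """Count lexicon hits; return a label only when the score is decisive,
    otherwise None so a later statistical stage handles the document."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score >= margin:
        return "positive"
    if score <= -margin:
        return "negative"
    return None  # undecided: pass to the maximum entropy / k-NN stages

labels = [lexicon_classify(t) for t in
          ["excellent wonderful service",
           "terrible poor support",
           "the meeting was long"]]
```

The decisively labelled documents then bootstrap the maximum entropy classifier, whose output in turn extends the training set for the final k-NN stage, which is the cascade the paper evaluates.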
WSN-based Support for Irrigation Efficiency Improvements in Arab Countries
Ali AL-Hamdi, Ahmed Monjurul Hasan, Muhammad Akram
Abstract: Arab countries suffer from acute water scarcity, and most regions depend on underground resources for their water consumption. Among the different consumers, agriculture is the sector that demands the highest percentage of water, for irrigation. Anthropogenic factors and mismanagement of the irrigation process play a significant role in making the water situation more severe. Nevertheless, with proper supporting tools, irrigation efficiency can be improved. The work in this paper proposes a contextual architecture model utilizing WSN technology. The ultimate goal of this model is to support the operation and management of irrigation technologies and the irrigation stakeholders' activities as well.
Hydrology Modeling for Runoff Harvesting of Tarow valley Using GIS
Ali Abed Abbas and Zakaria Yahya Khalaf
Abstract: The research aims to evaluate the amount and harvesting of runoff of the Tarow valley, which is an ungauged valley located south-west of Sinjar mountain in north-western Iraq, near the Syrian-Iraqi border, by placing barriers (rock fill and earth dams) to obstruct the water flow in these valleys, increase the recharge of groundwater and improve its quality in the area. The hydrology modeling links two computer models: the first changes the basin properties into digital maps using GIS; the second uses the Watershed Modeling System (WMS) to delimit the boundaries of the valleys in the study area and their morphological properties, and to estimate the volumes and maximum discharge for single rain storms, based on the assumptions of the American Soil Conservation Service (SCS) method.
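The SCS assumption the abstract cites reduces, in its standard millimetre form, to the curve-number runoff equation Q = (P - 0.2S)² / (P + 0.8S) with potential retention S = 25400/CN - 254. The storm depth, curve number and basin area below are invented illustration values, not the Tarow valley's parameters:

```python
def scs_runoff_depth(p_mm, curve_number):
    """SCS curve-number runoff depth: Q = (P - 0.2*S)^2 / (P + 0.8*S),
    with potential retention S = 25400/CN - 254 (millimetre form)."""
    s = 25400.0 / curve_number - 254.0
    ia = 0.2 * s                      # initial abstraction
    if p_mm <= ia:
        return 0.0                    # all rainfall retained, no runoff
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

def storm_volume_m3(p_mm, curve_number, basin_area_km2):
    """Runoff volume for one storm over the basin area."""
    return scs_runoff_depth(p_mm, curve_number) / 1000.0 * basin_area_km2 * 1e6

# invented storm and basin parameters:
q = scs_runoff_depth(50.0, 80)          # 50 mm storm over a CN = 80 basin
vol = storm_volume_m3(50.0, 80, 120.0)  # harvestable volume, m^3
```

This per-storm volume is the quantity WMS aggregates per sub-basin when sizing the harvesting barriers.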
Finite Difference Methods for Numerical Simulations for 1+2 Dimensional NLS Type Equations
Thiab R. Taha and Wei YU
Abstract: The nonlinear Schrödinger equation is of tremendous interest in both theory and applications. Various kinds of pulse propagation in optical fibers are modeled by some form of the nonlinear Schrödinger equation. We introduce sequential and parallel finite difference methods for numerical simulations of the 1+2 dimensional nonlinear Schrödinger type equations. The parallel methods are implemented on the rcluster multiprocessor at the University of Georgia (UGA). Our preliminary numerical results have shown that these methods give accurate results and considerable speedup. In this paper, we implement two finite difference schemes for the numerical simulation of the 1+2 dimensional NLS equation, as well as the parallel versions of these schemes.
Keywords: Finite Difference Methods, PDEs, Nonlinear Schrödinger Equation, Parallel Algorithms.
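A single explicit finite-difference step for the simpler 1+1 dimensional cubic NLS, i u_t + u_xx + |u|²u = 0, illustrates the kind of discretization involved. This is a clarity sketch only: the paper's schemes are for the 1+2 dimensional equation, and schemes used in practice (e.g. Crank-Nicolson) are implicit for stability. Grid and time-step values below are invented:

```python
import cmath

def nls_explicit_step(u, dx, dt):
    """One explicit finite-difference step of the 1+1D cubic NLS
    i u_t + u_xx + |u|^2 u = 0, i.e. u_t = i*(u_xx + |u|^2 u),
    with zero boundary values. Explicit Euler shown for clarity only."""
    n = len(u)
    new = [0j] * n
    for j in range(1, n - 1):
        lap = (u[j+1] - 2*u[j] + u[j-1]) / dx**2     # centred second difference
        new[j] = u[j] + 1j * dt * (lap + abs(u[j])**2 * u[j])
    return new

# small Gaussian-like initial pulse on a coarse grid (illustrative values)
grid = [cmath.exp(-((j - 10) * 0.5) ** 2) + 0j for j in range(21)]
stepped = nls_explicit_step(grid, dx=0.5, dt=1e-4)
```

Parallelizing such a step is natural because each grid point depends only on its immediate neighbours, so the domain can be split across processors with a one-cell halo exchange, which is the source of the speedup the paper reports.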
Analytical Study of the Effect of Transmission Range on the Performance of Probabilistic Flooding Protocols in MANETs
Muneer Bani Yassein, Qusai Abuein, Mohammed Shatnawi, Deya Alzoubi
Abstract: Broadcasting is one of the most important operations used in Mobile Ad Hoc Networks (MANETs) to disseminate data throughout the entire network. Simple flooding is the conventional operation that performs broadcasting in MANETs. Although flooding is a simple operation that achieves high delivery of data, it has many disadvantages, summarized by redundant broadcasts, contention and collision, which are referred to as the broadcast storm problem. Probabilistic protocols provide a good solution to the problems associated with simple flooding. This paper presents a comprehensive analytical study of the performance of probability-based routing protocols under different transmission ranges, and shows the effect of this parameter on the overall performance metrics. All experiments are conducted using NS-2. The results show that as the transmission range and the rebroadcast probability P increase, the performance of the multiple-P algorithms improves, where the protocol with the highest P value (P4) outperforms all other protocols in terms of Packet Delivery Ratio, End-To-End Delay (ETED) and routing overhead.
Keywords: MANET, broadcasting, flooding, probabilistic flooding.
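The probabilistic flooding rule being analyzed — rebroadcast on first reception with probability P — can be simulated in a few lines. The five-node topology is invented, and setting p = 1.0 recovers simple flooding:

```python
import random

def probabilistic_flood(adjacency, source, p, rng):
    """Simulate one probabilistic broadcast: every node that receives the
    packet for the first time rebroadcasts it with probability p
    (the source always broadcasts)."""
    received = {source}
    frontier = [source]
    while frontier:
        nxt = []
        for node in frontier:
            if node != source and rng.random() >= p:
                continue  # this node stays silent
            for neigh in adjacency[node]:
                if neigh not in received:
                    received.add(neigh)
                    nxt.append(neigh)
        frontier = nxt
    return received

# invented 5-node chain topology:
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
rng = random.Random(7)
delivered = probabilistic_flood(adj, source=0, p=1.0, rng=rng)  # p=1: simple flooding
```

Averaging `len(delivered)` over many seeded runs at various p values is the essence of measuring delivery ratio against rebroadcast probability; the transmission range studied in the paper changes the adjacency itself.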
Decrypting the Ciphertexts of RSA with Public-Key
Lakhdar Derdouri and Noureddine Saidi
Abstract: RSA is based on a trapdoor one-way function, which is easy to compute but very hard to invert without knowing the trapdoor. The cryptanalysis presented in this paper consists of finding a new decryption key which plays the same role as the original trapdoor. To find this new decryption key we must seek the maximum degree of composition of the ciphering function in a given modulus N. The maximum degree (d_max) is obtained by applying the ciphering function to a restricted set of residues in the modulus N. We then define the new decryption key as e^d_max. Thanks to this new key, we can decrypt any ciphertext for the given modulus. The interest of this cryptanalysis, contrary to factorization, is that the search for the decryption key is independent of the modulus size.
Keywords: Cryptology, Cryptanalysis, RSA Cryptosystem, Trapdoor One-way Function
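A related, classical form of this idea is the cycling attack: re-encrypt a ciphertext until the cycle closes, at which point the previous value is the plaintext. The sketch below illustrates the composition-degree principle on a toy modulus; it is not the authors' exact procedure, and it is only feasible for tiny N because the cycle length grows with the modulus:

```python
def cycling_attack(ciphertext, e, n, max_steps=10**6):
    """Classic cycling attack on RSA: repeatedly apply the encryption
    function E(x) = x^e mod n to the ciphertext until it reappears;
    the value one step earlier is the plaintext, since E(prev) == c."""
    prev, cur = ciphertext, pow(ciphertext, e, n)
    for _ in range(max_steps):
        if cur == ciphertext:
            return prev           # E(prev) == ciphertext, so prev is m
        prev, cur = cur, pow(cur, e, n)
    return None                   # cycle not found within the step budget

# toy RSA parameters (n = 33 = 3 * 11, e = 7; far too small for real use)
n, e = 33, 7
m = 4
c = pow(m, e, n)
recovered = cycling_attack(c, e, n)
```

The attack never touches the factorization of n, which mirrors the abstract's point that the decryption-key search is independent of the modulus size; its cost instead depends on the multiplicative order of e, which is why practical RSA parameters are chosen to make such cycles astronomically long.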
Towards an Approach for the Security of Information Systems with UML
Salim Chehida and Mustapha Kamel Rahmouni
Abstract: With the explosive growth of the world of telecommunications, driven by the Internet and stimulated by the penetration of transmission technologies, the problems of process and data security have become of paramount importance. Transactions through networks can be intercepted, above all since adequate legislation has not yet been fully enforced on the Internet. The functional specification of information systems (IS) alone is not enough: the design and realization of these systems must take into account, in addition to the functional needs, various security requirements. Taking the various security constraints (availability, authentication, integrity, secrecy, non-repudiation, etc.) into account in the modeling process constitutes one of the principal challenges for the designer of these systems. UML is the standard language for modeling the multiple views of an information system by using its various extension mechanisms. In this paper, we propose new UML extensions for the modeling of computer security requirements, as well as a new development process (the X process) which takes into account the security constraints of an IS in addition to its functional needs, and also the changes and evolution of the technical architecture of the systems.
Keywords: Modeling, UML, UMLsec, Development Process, Security of IS, Software Engineering.
Expert System for Islamic Punishment (ESIP)
Yasser A. Nada and Sultan Aljahdali
Abstract: It is undesirable to treat lightly a criminal who threatens the security of society. This paper presents the design and development of an expert system for Islamic punishment (ESIP). The knowledge base of the ES is obtained from the Al-Quran and the Prophetic sayings (Hadith). The Islamic penal system has several objectives: first, Islam seeks to protect society from the dangers of crime; second, Islam seeks to reform the criminal; third, the punishment is a recompense for the crime. We implement our expert system using the powerful and well-known expert system shell EXSYS CORVID, a Java-based expert system knowledge automation tool, and deploy the expert system to work directly on the Web. Our expert system outperforms the manual Islamic punishment system in efficiency.
Keywords: Expert System, Islamic Punishment, EXSYS CORVID
Multi-Phase Detection Technique for Removing Clusters of Impulse Noise
Ali Said Awad
Abstract: In this paper, a multi-phase detection (MPD) technique is proposed to restore images corrupted by clusters of impulse noise. The detection process is carried out through multiple iterations, each with a different threshold and window size. For a pixel to be considered original, it must have, in each phase, a sufficient number of similar pixels among its neighbors. The algorithm terminates when the number of detected pixels becomes almost constant or when no further improvement occurs in the quality of the restored image. Extensive simulations demonstrate that the MPD method delivers performance superior to other existing methods: it suppresses noisy clusters efficiently while preserving image details. Keywords: Impulse Noise, Image Denoising
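As an illustration of the detection idea described above, here is a minimal single-phase sketch in Python: a pixel is flagged as impulse noise when too few neighbors in its window are similar to it. The window size, threshold, and similarity count are illustrative parameters, not the values used in the paper.

```python
import numpy as np

def detect_phase(img, window=3, threshold=20, min_similar=3):
    """One detection phase: flag a pixel as noise when fewer than
    `min_similar` pixels in its window lie within `threshold` of its
    value (the pixel itself is excluded from the count)."""
    pad = window // 2
    padded = np.pad(img.astype(int), pad, mode='edge')
    noisy = np.zeros(img.shape, dtype=bool)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            block = padded[i:i + window, j:j + window]
            # count similar pixels, subtracting the pixel's own match
            similar = np.sum(np.abs(block - int(img[i, j])) <= threshold) - 1
            noisy[i, j] = similar < min_similar
    return noisy
```

A full MPD run would repeat this with varying `window` and `threshold` until the set of detected pixels stabilizes.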
Business Intelligence Maturity Models: Toward New
Essam Shaaban, Yehia Helmy, Ayman Khedr, and Mona Nasr
Abstract: Business Intelligence (BI) has become one of the most important research areas helping organizations and managers improve the decision-making process. This paper shows the barriers to BI adoption and discusses the most commonly used Business Intelligence Maturity Models (BIMMs), highlighting their pitfalls in order to reach a solution. Using new techniques such as Service-Oriented Architecture (SOA), Service-Oriented Business Intelligence (SOBI), or Event-Driven Architecture (EDA) leads to a new model. The proposed model, named the Service-Oriented Business Intelligence Maturity Model (SOBIMM), is briefly described in this paper.
Analysis of the Stagnation Behavior of the Interacted Multiple Ant Colonies Optimization Framework
Alaa Aljanaby and Ku Ruhana Ku-Mahamud
Abstract: Search stagnation is a common problem from which all Ant Colony Optimization (ACO) algorithms suffer, regardless of their application domain. The Interacted Multiple Ant Colonies Optimization (IMACO) framework is a recent proposal that divides the ant population into several colonies and employs certain techniques to organize the work of these colonies. This paper conducts experimental tests to analyze the stagnation behavior of IMACO and proposes having different ant colonies use different types of problem-dependent heuristics. The performance of IMACO was demonstrated by comparing it with the Ant Colony System (ACS), the best-performing ant algorithm. The computational results show the superiority of IMACO and that it suffers less from stagnation than ACS.
Automatic Classification of Questions into Bloom's Cognitive Levels Using Support Vector Machines
Anwar Ali Yahya and Addin Osman
Abstract: In recent years, e-learning has increasingly become a promising technology in educational institutions. Among the numerous components of e-learning systems, the question bank is a fundamental one: a repository of questions that assists students and instructors in the educational process. In a question bank, questions are annotated, stored, and retrieved based on predefined criteria such as Bloom's cognitive levels, so the automatic classification of questions according to Bloom's cognitive levels is of particular benefit for question bank management. This paper explores the effectiveness of support vector machines (SVMs) in tackling the problem of classifying questions into Bloom's cognitive levels. To do so, a dataset of pre-classified questions was collected; each question is processed through removal of punctuation and stop words, tokenization, stemming, term weighting, and length normalization. SVM classifiers with a linear kernel were built and evaluated on approximately 70% and 30% of the dataset respectively, using the SVM-Light software package. The preliminary results show satisfactory effectiveness of SVMs with respect to classification accuracy and precision. However, due to the small size of the current dataset, the classifiers' recall and F-measure results suggest the need for further experiments with a larger dataset to obtain conclusive results. Keywords: E-learning, Question bank, Text classification, Bloom's taxonomy, Machine learning.
Analysis of Factors That May Affect the Uses of Knowledge Management in Sudanese Companies
Nour Eldin Mohamed Elshaiekh, Khalid Ahmed Ibrahim, and Fahima Omar Mchulla
Abstract: Knowledge management is a major bottleneck in the information system of any organization. Knowledge management can provide efficient support in an organization's restricted domain; however, the principal barrier to the success of such a system comes from users' perspectives of the system. This paper reveals the factors that affect the use and effectiveness of knowledge management in Sudanese companies. Through the systems analysis and design methodology employed in this study, factors affecting both the success and failure of knowledge management were identified by users from different levels of the organization. The study concludes that the success factors include good infrastructure, knowledge sharing, familiarity with information and communication technology, staff experience, behaviour of decision makers, and qualifications and training. Failure factors include poor infrastructure, high cost, lack of security, globalization, growth of information and communication technology, and job overlap. Keywords: Knowledge Management, Systems Analysis and Design Methodology, Sudanese Companies.
Arabic Alert E-Mail Detection Using Rule Based Filter
Qasem A. Al-Radaideh and Ahmed F. AlEroud
Abstract: This paper evaluates the performance of a rule-based filter for detecting Arabic alert e-mails. Alert e-mails are e-mails related to criminal or terrorist activities, which are of great interest to both security agencies and the public. A set of Arabic e-mails was collected, pre-processed, and normalized. Useful features were extracted from the collected e-mails using categorical proportional difference (CPD) and term frequency variance (TFV) as feature weighting methods for the rule-based filter. As a result, the rule-based filter achieved good accuracy, detecting about 85% of the alert e-mails used in the experiments.
Audiovisual Document Modeling By Metadata
Héla Elleuch, Ameni Yengui, and Mahmoud Neji
Abstract: Querying multimedia resources relies on processes that facilitate access to information regardless of its heterogeneity. These resources are the subject of pedagogic information processing through video conferencing, which constitutes an interesting area of daily life. Some work has studied the generic modeling of multimedia documents by metadata; other studies have relied on models that aim to capture the complete composition of each document, and many works have been based on the annotation of documents. The modeling of multimedia documents has been a focus for several authors in the literature. In this paper, we study a modeling tool for video conferences in medicine that decomposes them into different media such as text, image, and audio. For each medium, we define the necessary metadata so that a lay person inquiring about the details of a medical video conference can obtain the needed information through our tool.
A Framework for Collecting Clientside Paradata in Web Applications
Natheer Khasawneh, Rami Al-Salman, Ahmad T. Al-Hammouri, and Stefan Conrad
A Fuzzy Local Search Classifier for Intrusion Detection
Dalila Boughaci, Samia Bouhali, and Selma Ordeche
Abstract: In this paper, we propose a fuzzy local search (FLS) method for intrusion detection. The FLS system is a fuzzy classifier whose knowledge base is modeled as fuzzy if-then rules and improved by a local search metaheuristic. The proposed method is implemented and tested on the benchmark KDD'99 intrusion dataset. The results are encouraging and demonstrate the benefits of our approach.
Digital Image Encryption Algorithm Based on a Linear Independence Scheme and the Logistic Map
Hazem Mohammad Al-Najjar
Abstract: In this paper, we propose a new image encryption algorithm based on chaos theory. Our approach creates combinations between two adjacent pixels to establish linear independence relationships within the same row, and stores the keys in the first column in encrypted form using a logistic map with an initial condition known as Key1. A second key, Key2, is then used with the logistic map to change the positions of the pixels by shuffling them. Analysis of the algorithm shows that it is strong against different types of attacks and sensitive to the initial conditions.
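As a rough illustration of how a logistic map with a secret initial condition (such as Key1) can drive an encryption step, the following sketch iterates x_{k+1} = r·x_k·(1−x_k), quantizes the trajectory to bytes, and XORs it with a pixel row. The function names and the XOR step are illustrative assumptions; the paper's actual scheme builds linear-independence combinations between adjacent pixels rather than a plain XOR.

```python
import numpy as np

def logistic_keystream(x0, n, r=3.99):
    """Iterate the logistic map x_{k+1} = r*x_k*(1-x_k) from the secret
    initial condition x0 and quantize each state to a byte."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)   # stays in (0, 1) for x0 in (0, 1)
        xs[i] = x
    return (xs * 256).astype(np.uint8)

def xor_encrypt(pixels, x0):
    """Symmetric step: XOR a row of pixels with the chaotic keystream."""
    ks = logistic_keystream(x0, len(pixels))
    return np.bitwise_xor(pixels, ks)
```

Because XOR is its own inverse, applying `xor_encrypt` twice with the same initial condition recovers the original row.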
Image Encryption Algorithm Based on Logistic Map and Pixel Mapping Table
Hazem Mohammad Al-Najjar and Asem Mohammad AL-Najjar
Abstract: In this paper, we propose a new image encryption algorithm based on the logistic map chaotic function. Our algorithm consists of two replacement approaches that change the pixel values without shuffling the image itself. To do this, we use a Pixel Mapping Table (PMT) with a random shifting value to increase the uncertainty of the image, and then modify the pixel values using a rows-and-columns replacement approach. Analysis of the algorithm shows that it is strong against different types of attacks and sensitive to the initial conditions.
Neural Network-Based Face Detection with Partial Face Pattern
Sinan Naji, Roziati Zainuddin, Hamid A. Jallb, Masoud Abou Zaid, and Amar Eldouber
Abstract: In this paper, we present a neural network-based method to detect frontal faces in grayscale images under unconstrained scene conditions such as complex backgrounds and uncontrolled illumination. The system is composed of two stages: threshold-based segmentation and a neural network-based classifier. Image segmentation using thresholding is used to reduce the search space; the artificial neural network classifier is then applied only to regions of the image marked as candidate face regions. The ANN classification phase crops small windows of the image and decides whether each window contains a face. A partial face template is used instead of the whole face to make the training process easier. To minimize the probability of misrecognition, texture descriptors such as mean, standard deviation, smoothness, and X-Y-Relieves are measured and input alongside the image data to form a solid feature vector. The ANN training phase is designed to be general, with minimum customization, and to output the presence or absence of a face (i.e., face or non-face). Face alignment is done using only one point, the face center.
Integration of a Bayesian Learning Model in a Multidocument Summarization System
Maher Jaoua, Lamia Hadrich Belguith, Fatma Kallel Jaoua, and Adelmajid Ben Hamadou
Abstract: In this paper, we propose a new method for multi-document summarization based on a learning model. The learning is used to deduce, for a given summarization task, the best combination of criteria for selecting the best summary (or extract); its use thus aims at a flexible multi-document summarization system that can be applied to various summarization tasks. We apply the naive Bayes learning algorithm to determine a set of extracts considered the best (generally there is more than one), and complement it with a multi-objective classification method to find the single best extract. Experiments with the learning component gave encouraging results, and the evaluation of the ExtraNews system, which implements our method, also gave interesting results. Keywords: Multi-document summarization, abstract, learning, Naive Bayesian model, multi-objective classification, classification criteria.
Development of Graphical User Interface by Applying Philosophy of Use Case Maps
Ebitisam K. Elberkawi, Mohamed M. Elammari
A Taxonomy of Email Spam Filters
Hasan Shojaa Alkahtani, Paul Gardner-Stephen, and Robert Goodwin
Dealing With Web Services Composition at the Architectural Level
Djamal Bennouar, Walid Khaled Hidouci, and Kahdidja Bentlemsan
Developing a Model for Generating Computer Program from Semi Natural Arabic Language
A Game of Life Automata-MAS Approach to MRI Brain Segmentation
Benmazou Sarah, Layachi Soumia, Merouani Hayet Farida
Evolution Framework for Software Architecture Using Graph Transformation Approach
Abdelkrim Amirat, Ahcene Menasria, and Nouredine Gasmallah
Abstract: This paper presents a graph transformation approach to software architecture evolution. Evolution is inevitable over the complete life of complex software-intensive systems and, more importantly, of entire product families; not only instance models, but also type models and entire modeling languages are subject to change. Software architecture is the centerpiece of software systems and acts as a reference point for many development activities, yet few of today's software systems are built to accommodate evolution. Evolution is primarily reflected in, and facilitated through, the software architecture. In this paper we focus on the different dimensions of architecture evolution with an automated evolution process for software architecture using graph transformation. The rules for the potential architecture evolution operators are defined using the AToM3 graph transformation tool.
Searching Concepts and Keywords in the Holy Quran
Ahmad T. Al-Taani and Alaa M. Al-Gharaibeh
Abstract: The Arabic language is one of the oldest languages in the world and presents its own features and challenges when searching Arabic-based content. Most search systems for the Holy Quran are organized around words (contained in the target verses) rather than the concepts those words denote, even though a word may denote many concepts (polysemy) and a concept can be denoted by many words (synonymy). In this study, we present a methodology for searching for concepts and keywords in the Holy Quran while improving the chances of finding all desired verses. Three approaches are applied and compared with respect to precision: text-based, stem-based, and synonyms-based. The text-based approach matches the full word, the synonyms-based approach matches the synonyms of the word, and the stem-based approach matches the stem of the word, obtained with the light stemming algorithm. Keywords: Information Retrieval, Stem-based system, Text-based system, Synonyms-based system, the Holy Quran.
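A toy illustration of the light-stemming idea behind the stem-based approach: strip one known prefix and one known suffix while keeping a minimal stem length. The affix lists below are a small illustrative subset, not the full light stemming algorithm the study applies.

```python
# Illustrative subsets of Arabic prefixes and suffixes (longest first).
PREFIXES = ["وال", "بال", "كال", "فال", "ال", "لل"]
SUFFIXES = ["ها", "ان", "ات", "ون", "ين", "ية", "ه", "ة", "ي"]

def light_stem(word):
    """Toy light stemmer: remove at most one prefix and one suffix,
    keeping a stem of at least two letters."""
    for p in PREFIXES:
        if word.startswith(p) and len(word) - len(p) >= 2:
            word = word[len(p):]
            break
    for s in SUFFIXES:
        if word.endswith(s) and len(word) - len(s) >= 2:
            word = word[:-len(s)]
            break
    return word
```

Matching on `light_stem(query_word)` instead of the raw word lets one verse term match several surface forms of the same root-like stem.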
The Employment of Axiomatic Design in Software Engineering: A Software Development Conceptual Framework
Basem El-Haik and Adnan Shaout
Abstract: Software permeates every corner of our daily life. Software and computers play central roles in all industries and modern technologies, including cyber security, enabling organizations to practice safe security techniques and minimize the number of successful cyber-security attacks. In manufacturing, software controls manufacturing equipment, manufacturing systems, and the operation of the manufacturing enterprise. At the same time, the development of software and IT technologies can be the bottleneck in the development of organizations and systems, since current software development is full of uncertainties, especially when new products are designed. Software Design for Six Sigma (Soft DFSS) is a methodology proposed by El-Haik & Shaout to tackle the shortcomings of current software development practices. The goals of Soft DFSS are twofold: first, to enhance algorithmic efficiency so as to reduce execution time, and second, to enhance productivity so as to reduce the coding, extension, and maintenance effort. As computer hardware rapidly evolves and the need for large-scale software systems grows, productivity becomes increasingly important in software engineering; the so-called "software crisis" is closely tied to the productivity of software development. Axiomatic Design is a methodology suggested as a conceptual framework for software development within Soft DFSS. This paper introduces Soft DFSS and the Axiomatic Design methodology, and shows through a case study the employment of Axiomatic Design as a software conceptual development engine within Soft DFSS. Keywords: Axiomatic Design, Software Complexity, Six Sigma, Software DFSS, Software Engineering, Software Development, Software Concept, Software Design
A Framework for Software Product Risk Management Based on Quality Attributes and Operational Life Cycle
Halima M. Mofleh and Ammar Zahary
Abstract: This paper presents a framework that aims to improve software product risk management by applying sequential processes during the operational life cycle of the product. The framework, called SPRMQ (Software Product Risk Management based on Quality attributes and operational life cycle), consists of four processes for managing software product risk: identifying risk factors, analyzing risk probabilities and their effects on product quality, risk mitigation, and risk monitoring. SPRMQ uses brainstorming to identify risk factors and a probability/impact approach, with some modifications, to analyze risk based on the quality attributes Functionality, Reliability, Performance, Efficiency, and Maintainability. To mitigate risk, SPRMQ uses three strategies: avoidance, minimization, and contingency. If a risk is unacceptable, SPRMQ uses the avoidance strategy; if a risk is acceptable and can be reduced, it uses the minimization strategy; if the risk cannot be reduced, it uses the contingency strategy. SPRMQ was applied to the Admission and Registration System (ARS) of the University of Science and Technology (UST). Testing results show that, using SPRMQ, a project/risk manager can effectively manage a software product by measuring the impact of risks at three levels: high, medium, or low. Keywords: Software Product Risk Management (SPRMQ), Risk Probabilities, Identifying Risk Factors, Risk Mitigation, Software Product Quality, Risk Monitoring, Mitigation Strategies.
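The probability/impact analysis and the three mitigation strategies described above can be sketched roughly as follows; the score thresholds and function names are illustrative assumptions, not SPRMQ's actual values.

```python
def risk_level(probability, impact):
    """Probability/impact scoring sketch: both inputs in [0, 1].
    The 0.5 / 0.2 thresholds are illustrative, not SPRMQ's."""
    score = probability * impact
    if score >= 0.5:
        return "high"
    if score >= 0.2:
        return "medium"
    return "low"

def mitigation_strategy(acceptable, reducible):
    """SPRMQ's rule: unacceptable risk -> avoidance; acceptable and
    reducible -> minimization; otherwise -> contingency."""
    if not acceptable:
        return "avoidance"
    return "minimization" if reducible else "contingency"
```

A risk register would pair each identified factor with its level from `risk_level` and the strategy chosen by `mitigation_strategy`.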
Intrusion Inspector and Detector in Local Area Network using SNMP and MMC Techniques (IIDLAN)
Salah Alawi and Ammar Zahary
Abstract: An Intrusion Detection System (IDS) is one of the tools that help detect attacks or intrusions in a network. By monitoring a LAN, intrusions entering the network can be reduced; however, tools are needed to help find intrusions in the network. This paper presents our approach, IIDLAN, which combines techniques to improve data protection: the Microsoft Management Console (MMC), whose event viewer is used to monitor intrusions on workstations, and the Simple Network Management Protocol (SNMP), which assists in managing the performance, bandwidth, and planning of the network. IIDLAN aims to raise the protection of a LAN subject to misuse or anomaly intrusions. Using SNMP alone cannot manage every event on a workstation, and using MMC alone cannot manage the activities of LANs individually and together. The integration of SNMP and MMC within IIDLAN makes network security stronger due to the presence of two additional monitoring tools. Results show the performance of the network in terms of data transmission rate, latency, and utilization using SNMP technology, and many aspects of monitoring network components and devices using the MMC tool. Keywords: Intrusion Detection System (IDS), Simple Network Management Protocol (SNMP), Microsoft Management Console (MMC), Event Viewer, IIDLAN, Linksys-05.
Analyzing Web Service Interaction Using Open ECATNets
Latreche Fateh, Sebih Hacene, and Belala Faiza
Abstract: Current technology and description languages related to the SOA (Service Oriented Architecture) paradigm give limited support for formally analyzing web service compositions. Consequently, several works have been done to tackle and investigate the composition process. In this work we propose using Open ECATNets, a sound combination of Open nets and ECATNets (Open Extended Concurrent Algebraic Term Nets), to model compactly and soundly the internal logic and message exchange behavior among peer services. Thanks to this model we not only obtain a high-level specification of service choreographies, but are also able to formally reason about it. Keywords: Interaction analysis, Web services, Open ECATNets
Email Spam Related Issues and Methods of Controlling Used By ISPs in Saudi Arabia
Hasan Shojaa Alkahtani, Robert Goodwin, and Paul Gardner-Stephen
Abstract: This paper presents the results of a survey of ISPs in Saudi Arabia about email SPAM and how they deal with it. We surveyed all ISPs in Saudi Arabia and received 11 responses from 27 ISPs. The survey investigated the nature of email SPAM, its volume, its types, and its sources in Saudi Arabia, as well as the effects of email SPAM on operating ISPs. It also aimed to understand the efforts of the government and ISPs to control SPAM, and to assess the effectiveness of current filters in detecting Arabic and English SPAM. The results showed that there was a large volume of SPAM in Saudi Arabia, varying from organization to organization; that the major languages of SPAM were Arabic and English; and that these Arabic and English SPAM emails were of different types and were sent from various sources around the world. The results also showed that email SPAM affected the operation of the ISPs, and that the effectiveness of current filters in detecting Arabic and English SPAM emails varied from method to method. Finally, some of the ISPs were not aware of government efforts to combat SPAM, while others reported that there were government efforts and that they contributed to them; ISPs also made their own attempts to control SPAM, such as implementing and updating filters and informing customers about SPAM. Keywords: SPAM, Arabic, filters, email, ISPs, English.
Motion Control of a Non-Holonomic Mobile Manipulator Using Fuzzy Logic
Abdelouahab Hassam and Miloud Hamani
Abstract: In this paper, a new methodology for the motion control of a mobile manipulator is presented. Motion control of a mobile manipulator addresses the problem of trajectory tracking in autonomous mode, allowing the end-effector and the platform to follow desired trajectories simultaneously without violating the non-holonomic constraints. Mobile manipulator systems present many problems due to the coupling of holonomic manipulators with non-holonomic bases. Since the platform has a slow and imprecise dynamic response, we propose a fuzzy kinematic control for it; conversely, the manipulator has a rapid and precise dynamic response, for which a fuzzy controller optimized by a genetic algorithm is proposed. Simulation results for a car-like platform equipped with a two-link manipulator are given to show the effectiveness of the proposed method. Keywords: Fuzzy Logic, Genetic Algorithms, Motion Control, Optimization, Trajectory Tracking.
A Multi agent based Tool for the Simulation of the Dynamics of a Bovine Population under the Effect of Viral infections
Tahar Guerram and Nour El Houda Dehimi
Abstract: Multi-agent systems are an approach that allows the study of population dynamics by defining the attributes and behaviors of the interacting individuals of a system. Multi-agent systems thus allow the design and implementation of the individual models proposed for the study of population dynamics, which have major advantages compared to aggregate models. This paper presents a multi-agent based framework for simulating the impact of viral infections on a population of cows, which may be modern (only females) or mixed. The simulation creates an artificial life that gives the user the feeling of working in a virtual laboratory and facilitates forecasting the impact of viral infections on the evolution of the targeted population. Keywords: multi-agent systems, viral infection, modeling and simulation, population dynamics, artificial life.
The Implementation of Face Security for Authentication Implemented on Mobile Phone
Emir Kremić and Abdulhamit Subaşi
Abstract: In this paper we present face recognition security for mobile phones. The model applied for face recognition is Eigenface. The implementation consists of two parts: MATLAB and the Droid emulator for Android mobile phones. The proposed implementation model arose from the observation that today's mobile phones are essentially computers: we use them for e-mail, agendas, data storage, and financial applications such as viewing stock markets, so we would like to provide a security model based on face recognition as a biometric approach for authentication on mobile phones. Given the vulnerability of the PIN, the most widely used mobile phone authentication mechanism, we present an approach that enables a new level of security for mobile phone users. The system has been tested with a database consisting of many images of facial expressions. The algorithm implemented for mobile face recognition on the MATLAB side is PCA. Limited by hardware capabilities, we made a trade-off between accuracy and computational complexity in the application. The proliferation of applications and data increases the user's need to protect the data stored on mobile devices. Keywords: Face Recognition, PCA, MATLAB, Droid, Authentication
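Since the abstract names PCA/Eigenface as the recognition model, here is a minimal Python sketch of computing eigenfaces using the small-covariance trick common in Eigenface implementations. The shapes and function names are illustrative; the actual system runs on the MATLAB side.

```python
import numpy as np

def eigenfaces(images, k=2):
    """Top-k eigenfaces from a stack of flattened face images (one row
    per image). Eigenvectors of the small n x n matrix X X^T are mapped
    back to pixel space instead of decomposing the huge covariance."""
    X = images - images.mean(axis=0)          # center the data
    L = X @ X.T                               # small n x n matrix
    vals, vecs = np.linalg.eigh(L)            # ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]        # pick the k largest
    U = X.T @ vecs[:, order]                  # back to pixel space
    return U / np.linalg.norm(U, axis=0)      # unit-length eigenfaces

def project(face, mean, U):
    """Coordinates of a face in the eigenface subspace; recognition
    compares these coordinates between a probe and stored faces."""
    return U.T @ (face - mean)
```

Authentication then reduces to a distance test between the projected probe face and the enrolled user's stored projection.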
Automatic Parallel Processing Environment
Abstract: Most of our universities contain a large number of computer laboratories with the latest hardware and software, yet the actual use of these laboratories is no more than 50% of their capacity, since they are not used during the night hours (from eight at night until eight in the morning). Most of these computers are modern (dual-core or quad-core) and are connected by a local area network (LAN). At Al Jouf University, many modern laboratories have been installed containing quad-core computers. The idea is to use all connected computers in a laboratory as a single virtual supercomputer (cluster), saving millions of dollars for the university. In this paper we design an Automatic Parallel Environment (APE) that offers normal users the possibility of using a cluster without any programming background. APE was tested on a laboratory containing quad-core computers connected by a LAN, and also on a wireless home network with four different computers connected via DSL. There are several reasons for building APE, including:
· Availability of computers in abundance in the offices, universities and homes.
· Availability of wired and wireless networks everywhere.
· The need for high speed computing in fields like physics, mathematics and artificial intelligence.
· Lack of familiarity with programming languages such as C and Java, and with parallel processing languages and tools such as MPI and PVM.
· The spread of multi-core processors in all types of computers, including desktop PCs and mobile and pocket PCs.
APE helps non-specialist, non-programmer users benefit from idle computers and use them as a high-speed virtual supercomputer, with no programming knowledge and no knowledge of parallel processing or networking required (parallel processing transparency). APE also provides a programmer interface so that professionals can use the environment and contribute by adding libraries to it. Keywords: Distributed Tasks, distributed objects, multi-core, network, laboratories, transparency.
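To illustrate the kind of transparent task distribution APE aims at, here is a toy Python sketch that farms independent jobs out to worker processes on a single host. APE itself spans LAN-connected machines, so this single-machine analogy and its names are purely illustrative.

```python
from multiprocessing import Pool

def heavy_task(n):
    """Stand-in for a user job that APE would ship to an idle machine:
    here, just a CPU-bound sum of squares."""
    return sum(i * i for i in range(n))

def run_parallel(jobs, workers=2):
    """Distribute independent jobs over local worker processes; the
    caller needs no knowledge of how the work is partitioned, which is
    the transparency APE targets across a whole laboratory."""
    with Pool(workers) as pool:
        return pool.map(heavy_task, jobs)
```

A user submits a list of job sizes and gets results back in order, never touching MPI- or PVM-style parallel programming.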
Template Matching Method for Recognition Musnad Characters Based on Correlation Analysis
Mohammed Ali Qatran
Abstract: Because of the historical significance of the Musnad alphabet, which is considered the basis of the modern Arabic language, the recognition of Musnad characters is necessary for studying the developmental stages of the Arabic language. Much research has been done on the recognition of English, Arabic, Japanese, Chinese, and Korean characters, but the recognition of Musnad characters remains an open research problem. In this paper, an attempt has been made to develop a simple and efficient method for the recognition of Musnad characters. The method is based on template matching, where a character is identified by analyzing its shape and comparing the features that distinguish each character. Experimental results show the relatively high accuracy of the developed method when tested on characters of all sizes. Keywords: Correlation analysis, Handwritten Character Recognition (HCR), Template matching, 2-D Correlation coefficient.
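The 2-D correlation coefficient at the core of such a template matching method can be sketched as follows; the helper names are illustrative, and a real system would also normalize character size before comparing.

```python
import numpy as np

def corr2d(template, window):
    """Pearson 2-D correlation coefficient between a character template
    and an equal-sized image window (both mean-subtracted)."""
    t = template - template.mean()
    w = window - window.mean()
    denom = np.sqrt((t * t).sum() * (w * w).sum())
    return (t * w).sum() / denom if denom else 0.0

def best_match(window, templates):
    """Label of the stored template most correlated with the window."""
    return max(templates, key=lambda name: corr2d(templates[name], window))
```

Identical shapes correlate at 1.0, inverted shapes at −1.0, so ranking candidates by `corr2d` picks the closest character.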
Node Selection Based on Energy Consumption in MANET
Jailani Kadir, Osman Ghazali, Mohamed Firdhous, and Suhaidi Hassan
Abstract: In a Mobile Ad hoc Network (MANET), the mobility of a node is unpredictable, and mobility is considered one of the defining characteristics of a wireless network. In addition, the energy constraints of the nodes must be taken into consideration when designing routing protocols; this is an important issue since energy consumption reduces the wireless network connection lifetime. The nodes in a MANET are fitted with batteries of limited capacity. To achieve an optimum route connection while extending the network lifetime, the source-intermediate-destination distance factor needs to be combined with the initial energy of a node when selecting it to participate in a route path. A probability-based node selection method is proposed in this paper for identifying the intermediate node with optimum stored energy that can last through the duration of a connection. The algorithm has been tested in simulations, which show that the node with the largest probability consumes the least energy. This not only helps sustain the communication with the lowest chance of interruption, but also prolongs the network lifetime due to the lowest possible energy consumption for a given communication. Keywords: Energy constraint, energy consumption, node selection, MANET.
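One plausible way to combine the two factors the abstract mentions (residual energy and distance) into a selection probability is sketched below. The paper's exact weighting is not given here, so this energy-over-distance formula is an illustrative assumption only.

```python
def selection_probabilities(candidates):
    """Weight each candidate intermediate node by residual energy over
    distance, then normalize to probabilities.
    `candidates` maps node id -> (residual_energy, distance);
    the weighting itself is a hypothetical stand-in for the paper's."""
    weights = {n: e / d for n, (e, d) in candidates.items()}
    total = sum(weights.values())
    return {n: w / total for n, w in weights.items()}

def select_node(candidates):
    """Pick the candidate with the highest selection probability."""
    probs = selection_probabilities(candidates)
    return max(probs, key=probs.get)
```

Under this weighting, a nearby node with plenty of stored energy dominates the selection, matching the abstract's goal of routes that survive the connection.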
A New Classification of Non-Functional Requirements for Service-Oriented Software Engineering
Yousra Odeh and Mohammed Odeh
Abstract: The service-oriented model of computing is increasingly becoming the mainstream for developing complex software systems, in particular highly distributed and web-based systems. However, the classification and specification of non-functional requirements (NFRs) for software services, and for service-oriented systems generally, have not been addressed to the level that NFR classification has been attempted for non-service-oriented systems. In this paper, we introduce a new framework for classifying non-functional requirements in relation to engineering software services and service-oriented systems. This new classification is anticipated to make a significant contribution to facilitating the identification and specification of NFRs for service engineering and service-oriented systems.
The Cultural Aspects of Designing Jordanian Websites: An Empirical Evaluation of University, News, and Government Websites by Different User Groups
Rasha H. O. Tolba
Abstract: This study presents a comparative evaluation of a number of local Jordanian websites. The purpose of the study was to determine the different kinds of "cultural markers" that influence users' website usability. Furthermore, this study attempts to identify culturally sensitive Jordanian design elements for use in culturally-centered Jordanian website design. In addition, this paper studies the effect of both cultural dimensions and user interface components on user acceptance using the Technology Acceptance Model (TAM). The outcomes show some similar preference perceptions, while others differ. Keywords: culturability, TAM, website localization, cultural usability, metaphors, mental model, navigation, interaction, appearance, Hofstede cultural dimensions, ease of use.
Force / Motion Control for Constrained Robot Manipulator Using Adaptive Neural Fuzzy Inference System (ANFIS)
K. H. Hassan Al-Maliki, W. A. Wali, Hameed L. Jaber, and Turky Y. Abdullah
Abstract: This paper presents an Adaptive Neural Fuzzy Inference System (ANFIS) controller for a constrained robot manipulator that compensates for the uncertainties in the robot dynamics. The ANFIS is trained online, based on the computed torque method applied to the dynamic system, to track both a desired force trajectory and a desired motion trajectory. The mathematical equations of the control law and the closed-loop errors are derived. Based on the derived equations, the ANFIS is used to identify the dynamic parameters of the constrained robot. The simulation is carried out using a two-link constrained robot. Simulation results show that the trajectory tracking errors of motion and force converge with good accuracy. Keywords: Constrained Robot, Force/Motion Control, ANFIS
An Overview of Flow-Based and Packet-Based Intrusion Detection Performance in High Speed Networks
Hashem Alaidaros, Massudi Mahmuddin, and Ali Al Mazari
Abstract: Network Intrusion Detection Systems (NIDSs) are widely deployed security tools that detect cyber-attacks and intruder activity by observing network traffic. With the increase in network speed and in the number and types of attacks, existing NIDSs face the challenge of capturing every packet and comparing it against malicious signatures. These challenges affect the efficiency of NIDSs, mainly their performance and detection accuracy. This paper presents an overview of how the performance of payload-based and flow-based NIDSs is affected by threats and attacks in high-speed network environments. The impact of these new technologies on NIDSs is described in terms of performance and accuracy. Throughout the analysis of the literature on this topic, we found that packet-based NIDSs process every packet (payload) received. While this produces few false alarms, it is very time consuming, so it is hard, or even impossible, to apply the packet-based approach at speeds of multiple Gigabits per second (Gbps). Flow-based NIDSs have a lower overall amount of data to process, which makes them the logical choice for high-speed networks, but they suffer from high false alarm rates. We therefore recommend that a hybrid model combining both kinds of NIDS may provide the ability to react to a wider scope of attacks in high-speed network environments. Keywords: Network intrusion detection, packet-based, flow-based, high speed networks, efficiency
An Algebraic Algorithm for the Integrated Inventory Transportation Supply Chain Problem
Mohammed E. Seliaman
Abstract: In this paper, we develop a three-stage, serial supply chain production inventory model with the integration of transportation cost. This supply chain model is formulated for the integer multipliers coordination mechanism, where firms at the same stage of the supply chain use the same cycle time and the cycle time at each stage is an integer multiplier of the cycle time used at the adjacent downstream stage. We develop an optimal replenishment policy using a simple algebraic procedure to solve the problem without the use of differential calculus. Keywords: Algebraic Algorithm, supply chain management, inventory model.
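The flavour of such calculus-free algebraic procedures can be illustrated on the simplest single-stage analogue (a sketch only, not the paper's three-stage model; here $A$ stands for a generic setup-type cost and $B$ for a generic holding-cost rate, symbols chosen for illustration). A cycle-time cost of the form $f(T) = A/T + BT$ is minimized by completing the square rather than by differentiating:

```latex
\begin{aligned}
f(T) &= \frac{A}{T} + B T
      = \left( \sqrt{\frac{A}{T}} - \sqrt{B T} \right)^{2} + 2\sqrt{AB}
      \;\ge\; 2\sqrt{AB},
\end{aligned}
```

with equality exactly when $\sqrt{A/T} = \sqrt{BT}$, i.e. at $T^{*} = \sqrt{A/B}$, giving the minimum cost $f(T^{*}) = 2\sqrt{AB}$. The squared term is non-negative and vanishes at $T^{*}$, so no derivative is needed.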
An Intelligent Approach for DoS Attack Detection
Amr Hassan Yassin and Hany Hamdy Hussien
Abstract: The purpose of this work is to develop an enhanced detection approach for Denial of Service (DoS) attack intrusions in a particular network. An achievable, optimized neural network model is presented for the proposed detection system. The data used for training and testing were collected with the common packet analyzer Tcpdump. This RBF-NN model can be used as a general classifier for several types of attack methods. Keywords: Intrusion detection, Neural Network, Radial Basis Function, and Denial of service.
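A radial basis function network classifies by comparing an input against a set of prototype centers through Gaussian hidden units. The following minimal sketch shows only the forward pass; the two-dimensional features, hand-picked centers, widths, and weights below are invented for illustration and are not taken from the paper (a real RBF-NN would learn them from the Tcpdump data).

```python
import math

def rbf_forward(x, centers, widths, weights):
    """Gaussian hidden layer followed by a linear output layer.
    Each hidden unit fires strongly when x is near its center."""
    hidden = [
        math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2.0 * s * s))
        for c, s in zip(centers, widths)
    ]
    # one output unit per class: weighted sum of hidden activations
    return [sum(w * h for w, h in zip(row, hidden)) for row in weights]

def classify(x, centers, widths, weights, labels):
    """Return the label whose output unit responds most strongly."""
    scores = rbf_forward(x, centers, widths, weights)
    return labels[max(range(len(scores)), key=scores.__getitem__)]
```

With one center near "normal" traffic statistics and one near "DoS"-like statistics, classification reduces to which Gaussian bump the input falls under.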
Design and Implementation of RFID-Sensor Middleware Compliant with EPCglobal Architecture Framework
Md. Kafil Uddin and Bonghee Hong
Abstract: This paper proposes a new RFID-Sensor Middleware System that is compliant with the EPCglobal framework. It uses extended APIs for sensor tag management. In this paper, we describe the unique features of our middleware, its typical usage and its implementation. Keywords: Software Sensor tag; Active RFID; Middleware.
A New Architecture for Translation Engine Using Ontology: One Step Ahead
Abstract: A translation process usually needs external information to help generate an accurate rendering of the target text. Analyzing an input sequence in order to determine its grammatical structure with respect to a given formal grammar is known as parsing (Bataineh & Bataine, 2009). The main idea of the proposed architecture is to use the WordNet ontology as a syntactic guide, along with a Transition Network Grammar, to determine the grammatical structure of the text to be translated. This paper describes ongoing research with evolving results and developments. The main architecture is presented here to open the door for several future steps of further integration with other techniques and approaches. Keywords: Translation, WordNet, Transition Network Grammars, mapping engine
Towards A New Way for Aspect-Oriented Software Programming
VisPLAJ, a Pedagogic Visual Programming Language for AspectJ
Sassi Bentrad and Djamel Meslati
Abstract: Visual programming languages (VPLs) represent perhaps the biggest departure from traditional programming approaches, and the last twenty years have seen remarkable progress in this field. While various visual programming tools have been proposed, with a number already on the market and others still at the research prototype stage, it is difficult to predict their suitability for real-world applications. Their success is largely limited to specialised programming domains; there has been less success in more general programming applications. This paper offers a preview of our research project, in which we aim to develop a new way of doing Aspect-Oriented Software Programming, Visual Aspect-Oriented Programming (VAOP), and to develop an educational language for teachers and students of this programming paradigm. Here, we present the current state of the art and emerging research in combining visual with aspect-oriented and object-oriented programming. We highlight some of the basic concepts of visual aspect-oriented and object-oriented programming, its classification, current research trends, and the benefits gained from using it. This manuscript can help researchers identify fruitful topics for future novice-programming research. Keywords: Visual Programming, Visual Language, Visual Aspect-Oriented Language, Aspect-Oriented Software Programming, Novice Programming.
A Survey of Indexing Techniques in Native XML Databases
Imen Zemmar, Abdallah Benouareth, and Labiba Souici-Meslati
Abstract: With the huge increase of XML documents on the Web, indexing, storing and retrieving these documents is of great concern. Indexing and retrieving XML documents has recently become an active research area because indexes allow convenient access to parts of XML documents. Several methods have been proposed for indexing XML documents; they fall into two categories, those emanating from the database community and those arising from the information retrieval community. This article presents an overview of the different indexing techniques for native XML databases, classifying them into categories according to their common features and comparing them to find which is the most suitable for the emerging problem of semi-structured information retrieval. Keywords: Semi-structured documents, Semi-structured information retrieval, XML indexing.
Palm Vein Identification Using Radon Transform: RBF Approach
Alaa Aboud, Ahmad Khatoun, Samer Chantaf, Rola El Saleh, Mohammad Ayache, and Amine Nait Ali
Abstract: In this paper, a new biometric method based on palm vein recognition is developed. The main goal is to study the possibility of contactless identification of individuals using a sequence of palm vein images captured by a camera in the near infrared. The method is based on the Radon transform for feature extraction and a radial basis function (RBF) network for classification and identification. This approach shows that the palm vein pattern is unique to each individual, and encouraging results demonstrate the good performance of the Radon transform and the RBF network for classification. Keywords: Palm vein, Radon Transform, Neural Network, RBF.
On Finding the Best Number of States for an HMM-Based Offline Arabic Word Recognition System
Talaat M. Wahbi, Mohamed E. M. Musa, and Izzaldin M. Osman
Abstract: This paper describes a method to recognize off-line handwritten Arabic names. The classification approach is based on Hidden Markov Models (HMMs). We use a data set of 20 Arabic names. The system was trained using 2,000 handwritten names, and a separate set of 1,000 names was used for testing. For each name, five HMM models with 3, 4, 5, 6, and 7 states were trained. The experiments show that the best number of states differs from name to name. Keywords: Arabic handwriting recognition, Hidden Markov models.
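The model-selection loop behind "best number of states" can be sketched with the standard forward algorithm: train one HMM per state count, then keep the model that assigns the highest likelihood to the data. The sketch below uses a toy discrete HMM and only two candidate state counts (the paper trains Gaussian-type models with 3-7 states on real images; `forward_likelihood` and `best_model` are illustrative names, and the toy models stand in for trained ones).

```python
def forward_likelihood(obs, pi, A, B):
    """P(obs | model) via the forward algorithm for a discrete HMM.
    pi[i]: initial state probs, A[i][j]: transitions, B[i][o]: emissions."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

def best_model(obs, models):
    """models: {n_states: (pi, A, B)}. Pick the state count whose
    model scores the observation sequence highest."""
    return max(models, key=lambda k: forward_likelihood(obs, *models[k]))
```

For an alternating observation sequence, a two-state "flip-flop" model fits perfectly while a one-state model cannot, so `best_model` selects two states; the same comparison, run per name over 3-7 states, is what the abstract describes.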
Decision Trees for Handwritten Arabic Words Recognition
Siham Amrouch, Aida Chefrour, and Labiba Souici-Meslati
Abstract: In this paper, we present a system based on decision trees for the off-line recognition of handwritten Arabic words. The aim of this work is to design and implement a system for the recognition of Algerian city names (wilayas), based on the symbolic-learning decision tree approach. After the acquisition step, images are preprocessed and structural features (subwords, loops, ascenders, descenders and diacritical dots) are extracted. These features, combined with the corresponding classes, are presented as input to a learning process that produces a decision tree, which is then used for the classification step in our recognition system. The resulting tree can be expressed more explicitly as a rule base for word classification. These rules are not based on theoretical information, but on training samples. Our experimental recognition results are encouraging and confirm our expectation that the use of structural features and symbolic learning is an interesting approach to holistic handwritten word recognition. Keywords: Arabic handwriting recognition, decision tree, structural features, C4.5, Weka
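The heart of a C4.5-style learner is choosing, at each node, the structural feature with the highest information gain. A minimal sketch of that split-selection step (the feature values and class names below are invented toy data, not the paper's wilaya dataset):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a class-label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(samples, labels, feature):
    """Entropy reduction from splitting on a discrete feature.
    samples: list of feature dicts, labels: parallel class list."""
    n = len(samples)
    gain = entropy(labels)
    by_value = {}
    for s, y in zip(samples, labels):
        by_value.setdefault(s[feature], []).append(y)
    for subset in by_value.values():
        gain -= (len(subset) / n) * entropy(subset)
    return gain

def best_split(samples, labels):
    """Feature a decision-tree learner would test at this node."""
    return max(samples[0], key=lambda f: information_gain(samples, labels, f))
```

A feature that perfectly separates the classes (gain 1 bit on a balanced two-class set) is chosen over an uninformative one (gain 0); applied recursively, this yields the tree, and each root-to-leaf path reads off as one classification rule.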
Gait Analysis for Criminal Identification Based on Motion Capture
Nor Shahidayah Razali and Azizah Abdul Manaf
Abstract: The need for criminal identification is becoming ever more critical with the increasing number of crimes occurring in our vast society. As numerous identification techniques are being used for civilian and forensic applications, gait and its features have evoked considerable interest. The literature confirms that accurate results can be obtained in gait identification using the motion capture approach, especially in the critical areas of clinical and sport applications. By using the motion capture technique, some difficulties in acquiring precise data can be avoided, as the motion capture system captures the exact coordinates of each body point. In this research, a new mechanism for criminal identification is proposed using a person's gait as a case study. The system was developed based on a normalization method and Principal Component Analysis (PCA), which optimize the features extracted from the gait motion data. The principal components obtained were then matched between the sample and suspect data using the Euclidean distance method. Results from the experiments show that the proposed system is capable of identifying a person by their gait, by matching the sample and suspect motion files, and presents the most probable person as strong evidence for criminal identification. Keywords: Gait Identification, PCA, Motion Capture, Euclidean Distance.
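The matching stage described above (normalize the feature vectors, then pick the gallery subject at minimum Euclidean distance) can be sketched as follows. This sketch omits the PCA projection for brevity and uses invented names (`normalize`, `identify`) and toy vectors, not the paper's motion-capture data.

```python
import math

def normalize(v):
    """Scale a gait feature vector to zero mean and unit norm, so
    matching compares the shape of the gait signal, not its magnitude."""
    m = sum(v) / len(v)
    centered = [x - m for x in v]
    norm = math.sqrt(sum(x * x for x in centered)) or 1.0
    return [x / norm for x in centered]

def identify(suspect, gallery):
    """Return the id of the gallery sample closest (Euclidean distance,
    after normalization) to the suspect's feature vector."""
    s = normalize(suspect)
    def dist(v):
        w = normalize(v)
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(s, w)))
    return min(gallery, key=lambda pid: dist(gallery[pid]))
```

In the full system the vectors would first be projected onto the leading principal components, and the nearest match would be reported together with its distance as the confidence measure.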
Data Hiding Technique Based On Dynamic LSB
Naziha M. AL- Aidroos, Marghny H. Mohamed, and Mohamed A. Bamatraf
Abstract: In this paper, we propose a steganographic technique for images that provides a higher capacity for secret information as well as imperceptibility of the stego image for secret communication. A spatial-domain approach to the image is used to hide and recover the secret information. The principle behind the proposed method is to increase the embedding capacity with minimal effect on the stego image, aiming at high imperceptibility based on simple LSB substitution. The proposed technique uses a variable number of LSBs rather than a fixed one; for greater efficiency, the data hiding process selects a set of edge pixels using pixel value differencing (PVD) to minimize visual effects on the stego image. The experimental results show efficient performance of the proposed method compared to similar methods in the same domain, in terms of PSNR and capacity, in addition to visual effects. The efficiency of the model is evaluated from the viewpoint of both the insertion amount and the visual effects on the cover image (i.e., image quality). Moreover, the variable number of LSBs improves resistance against image steganalysis. Keywords: Steganography, LSB Substitution, Pixel Value Difference, Data Hiding.
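The variable-depth idea can be sketched on a flat pixel list: a PVD-inspired rule gives busy regions (large neighbour difference) more hiding bits than smooth ones. This is a simplified sketch, not the paper's scheme: the thresholds 16 and 64 and the trick of computing the depth from the high bits of the modified pixel (so the extractor recovers the same depth) are assumptions made to keep the round trip exact.

```python
def depth(p_high, q):
    """PVD-inspired rule (assumed thresholds): the larger the local
    difference, the more LSBs the pair can absorb invisibly."""
    d = abs(p_high - q)
    return 1 if d < 16 else 2 if d < 64 else 3

def embed(pixels, bits):
    """Embed a bit string into the first pixel of consecutive pairs,
    using a depth derived from the (unmodified) high bits and neighbour."""
    out, i = list(pixels), 0
    for k in range(0, len(out) - 1, 2):
        if i >= len(bits):
            break
        n = depth(out[k] >> 3 << 3, out[k + 1])   # high bits survive embedding
        chunk = bits[i:i + n].ljust(n, "0")
        out[k] = (out[k] >> n << n) | int(chunk, 2)
        i += n
    return out

def extract(stego, nbits):
    """Recompute each pair's depth from the stego image and read LSBs."""
    bits = ""
    for k in range(0, len(stego) - 1, 2):
        if len(bits) >= nbits:
            break
        n = depth(stego[k] >> 3 << 3, stego[k + 1])
        bits += format(stego[k] & ((1 << n) - 1), "0" + str(n) + "b")
    return bits[:nbits]
```

Since at most 3 LSBs are touched, the high bits used by `depth` are identical in cover and stego, so embedder and extractor agree on the depth of every pair without side information.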
Towards the Use of Program Slicing in the Change Impact Analysis of Aspect-Oriented Programs
Imad Bouteraa and Nora Bounour
Abstract: Change impact analysis plays a crucial role in software maintenance; it determines the effect of a change in one entity on the other entities of the software. Several impact analysis techniques for various paradigms have been proposed in the literature, but few of them address this problem for aspect-oriented programs. In this paper we propose a new approach to change impact analysis in aspect-oriented programs. In order to obtain accurate results, we use program slicing, a program analysis technique that explores the dependencies existing in software source code. Keywords: change impact analysis, aspect-oriented programming, program slicing.
Risk-Driven Compliant Access Controls for Clouds
Hanene Boussi Rahmouni, Kamran Munir, Mohammed Odeh, and Richard McClatchey
Abstract: There is widespread agreement that Cloud computing offers proven cost-cutting and agility benefits. However, security and regulatory compliance issues continue to challenge the wide acceptance of this technology by both social and commercial stakeholders. An important factor behind this is that Clouds, and in particular public Clouds, are usually deployed and used within broad geographical or even international domains. This implies that the exchange of private and other protected data within the Cloud environment is governed by multiple jurisdictions. Although these jurisdictions have a great degree of harmonisation, they present possible conflicts that are difficult to negotiate at run time. So far, important efforts have been made to deal with regulatory compliance management for large distributed systems; however, measurable solutions are required for the Cloud context. In this position paper, we propose an approach that starts with a conceptual model of explicit regulatory requirements for exchanging private data in a multi-jurisdictional environment, and builds on it to define metrics for non-compliance, or risks to compliance. These metrics are integrated into the usual data access-control policies and checked at policy analysis time, before a decision to allow or deny the data access is made. Keywords: Cloud, Privacy, Data Access, Semantic Web, Requirements Engineering.
A Multimedia Chaos-Based Encryption Algorithm
Brahim Boulebtateche, Mohamed Mourad Lafifi, and Salah Bensaoula
Abstract: In today's digital world, the security of multimedia data (such as digital audio signals, images, and videos) is becoming a major concern due to the rapid development of digital communications and networking technologies. Traditional encryption schemes such as DES (Data Encryption Standard) and its equivalents perform poorly for multimedia data because of the large data size and high redundancy. Chaotic systems are extremely sensitive to their control parameters and initial conditions, a feature that has proved very effective in the field of cryptography. In this paper, a computer-based encryption system using chaotic time series is described. The system is used for encrypting audio and image files, for the purpose of providing secure databases and/or sending sensitive multimedia data over open networks (such as the Internet). It uses data sorting and circular shifting processes in two dimensions, according to a secret computer-generated sequence of chaotically ordered numbers. Several experimental results on audio and image data encryption and decryption, key sensitivity tests, and statistical analysis show that the proposed approach to multimedia cryptosystems performs efficiently and can be applied to secure real-time encryption and safe transmission of confidential data.
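The core trick, deriving a permutation by sorting a chaotic sequence, can be shown in one dimension with the logistic map. This is a toy sketch of the sorting step only, not the paper's full 2-D sort-and-shift scheme; the map parameters and the 100-iteration burn-in are assumptions.

```python
def logistic_sequence(x0, r, n):
    """Iterate the logistic map x <- r*x*(1-x); discard a transient,
    then emit n chaotic values. (x0, r) act as the secret key."""
    x = x0
    for _ in range(100):          # burn-in: decorrelate from x0
        x = r * x * (1 - x)
    seq = []
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

def permutation_from_key(x0, r, n):
    """Sorting the chaotic values yields a key-dependent shuffle order."""
    seq = logistic_sequence(x0, r, n)
    return sorted(range(n), key=seq.__getitem__)

def encrypt(data, x0, r):
    perm = permutation_from_key(x0, r, len(data))
    return [data[i] for i in perm]

def decrypt(cipher, x0, r):
    """Regenerate the same permutation from the key and invert it."""
    perm = permutation_from_key(x0, r, len(cipher))
    out = [0] * len(cipher)
    for j, i in enumerate(perm):
        out[i] = cipher[j]
    return out
```

Because the logistic map in its chaotic regime diverges exponentially for nearby seeds, a key that is wrong even in the fourth decimal place regenerates an unrelated permutation, which is the key-sensitivity property the abstract tests.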
A Particle Swarm Algorithm for Solving the Maximum Satisfiability Problem
Abstract: The Maximum Satisfiability problem (Max-SAT) is one of the best-known variants of the satisfiability problem. The objective is to find the assignment of a set of Boolean variables that maximizes the number of satisfied clauses in a Boolean formula. This problem was shown to be NP-complete when the number of variables per clause is three or more. In this paper we investigate the use of particle swarm optimization principles to solve this problem. The underlying idea is to harness the optimization capabilities of the PSO algorithm to obtain good-quality solutions to the Max-SAT problem. To foster the process, a local search has been used. The second major feature of our approach is the use of an adaptive objective function based on clause weights. The results obtained are very encouraging and show the feasibility and effectiveness of the proposed hybrid approach.
Keywords: Maximum Satisfiability problem, Particle Swarm Optimization, Local Search.
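A bare-bones binary PSO for Max-SAT can be sketched as follows. This is a generic sketch, not the paper's hybrid: the local-search step and the adaptive clause-weight objective are omitted, the count of satisfied clauses serves as fitness, and the coefficient values and velocity clamp are conventional assumptions. Clauses use DIMACS-style signed integers (k means variable k true, -k means false).

```python
import math
import random

def satisfied(clauses, assign):
    """Count clauses with at least one true literal under assign."""
    return sum(
        any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
        for clause in clauses
    )

def pso_maxsat(clauses, n_vars, swarm=20, iters=200, seed=1):
    """Binary PSO: real velocities, Boolean positions sampled through a
    sigmoid of the velocity. Returns (best assignment, clauses satisfied)."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    pos = [[rng.random() < 0.5 for _ in range(n_vars)] for _ in range(swarm)]
    vel = [[0.0] * n_vars for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pscore = [satisfied(clauses, p) for p in pos]
    g = max(range(swarm), key=pscore.__getitem__)
    gbest, gscore = pbest[g][:], pscore[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(n_vars):
                v = (vel[i][d]
                     + 2.0 * rng.random() * (pbest[i][d] - pos[i][d])
                     + 2.0 * rng.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-4.0, min(4.0, v))   # clamp: avoid saturation
                pos[i][d] = rng.random() < sig(vel[i][d])
            s = satisfied(clauses, pos[i])
            if s > pscore[i]:
                pscore[i], pbest[i] = s, pos[i][:]
                if s > gscore:
                    gscore, gbest = s, pos[i][:]
    return gbest, gscore
```

The paper's hybrid would interleave a local-search pass after each position update and reweight hard clauses in the objective; both slot naturally into the fitness evaluation above.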