Tengku Mohd Sembok, Mohammed Abu Shugier
Machine Translation (MT) has been defined as the process that utilizes computer software to translate text from one natural language to another. This definition involves accounting for the grammatical structure of each language and using rules and grammars to transfer the grammatical structure of the source language (SL) into the target language (TL). Word agreement and ordering play an important part in the process. MT should handle agreement between the subject and verb where the number, gender, person and features of the subject are
important factors in the derivation of the verb, as are the features of the verb itself. Other agreements are required between adjectives and nouns, where Arabic adjectives depend on the number, gender and person, as well as the definiteness or indefiniteness, of the nouns. Further agreements exist between numbers and countable nouns. The paper presents a rule-based approach to English-to-Arabic MT, with emphasis on the handling of word agreement and ordering. The methodology is flexible and scalable; its main advantages are, first, that it is rule-based and, second, that it can be applied to other languages with minor modifications.
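The agreement handling described above can be sketched as a feature-based lookup: the subject's person, gender and number features select the target verb form. The Python sketch below is illustrative, not the paper's rule engine; the present-tense table for the root كتب ("to write") and the feature tuples are assumptions made for the example.

```python
# A minimal sketch of rule-based subject-verb agreement for English-to-Arabic
# transfer. The feature tuples and the verb table are illustrative; a full
# system would derive the forms morphologically rather than list them.

# Present-tense forms of كتب ("to write"), keyed by (person, gender, number).
VERB_KTB = {
    (3, "m", "sg"): "يكتب",    # he writes
    (3, "f", "sg"): "تكتب",    # she writes
    (3, "m", "pl"): "يكتبون",  # they (m.) write
    (1, None, "sg"): "أكتب",   # I write
    (1, None, "pl"): "نكتب",   # we write
}

def agree_verb(table, person, gender, number):
    """Select the target-language verb form matching the subject's features."""
    # First person does not distinguish gender in this paradigm.
    key = (person, gender if person != 1 else None, number)
    if key not in table:
        raise KeyError(f"no agreement rule for {key}")
    return table[key]
```

The same table-driven pattern extends to adjective-noun agreement by adding definiteness to the feature tuple.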
DR NASSIR H. SALMAN
This paper describes the design and programming of a new image processing toolbox using Matlab code. The toolbox is used as an educational tool for processing digital images and helps students understand how the different image processing functions work: how an image of any format and size is opened; how slice images are combined into a movie, as a video camera produces; how original images and their processed results are displayed in the same window for comparison purposes; showing an image histogram; running the watershed segmentation method; enhancement; thresholding; separating a color image into its components; adding Gaussian and salt-and-pepper noise to an image and then applying successive filtering; and plotting any signal data profile. The package also connects easily with all Matlab functions and uses the standard Matlab dialog box designs. All the functions in this toolbox are collected and programmed in Matlab code, as shown throughout this paper. The toolbox is easy to use in the image processing field.
Keywords: digital image processing toolbox, filter, image enhancement, Matlab codes, image segmentation.
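The noise-then-filter exercise the toolbox teaches can be illustrated outside Matlab as well. Below is a minimal Python sketch of the same two operations, salt-and-pepper noise injection followed by 3x3 median filtering, on a grayscale image stored as a list of lists; the function names are illustrative, not the toolbox's API.

```python
import random
import statistics

def add_salt_pepper(img, prob, seed=0):
    """Corrupt a grayscale image (rows of 0-255 values) with salt-and-pepper noise."""
    rng = random.Random(seed)
    out = [row[:] for row in img]
    for i, row in enumerate(out):
        for j in range(len(row)):
            r = rng.random()
            if r < prob / 2:
                out[i][j] = 0      # pepper
            elif r < prob:
                out[i][j] = 255    # salt
    return out

def median_filter3(img):
    """3x3 median filter, the classic remedy for salt-and-pepper noise.
    Border pixels are copied unchanged for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [img[i + di][j + dj] for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = statistics.median(window)
    return out
```

A single corrupted pixel inside an otherwise flat region is fully removed by the median, which is why median filtering suits impulse noise better than linear smoothing.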
Dr. Hedaya Alasooly
In this paper, I would like to share some of my knowledge of hacking. The paper discusses many of the hacking tools and strategies existing today and mainly covers the following: the Certified Ethical Hacking course; stealers; keyloggers; Trojans; web downloaders; sending the patch to the victim; fake pages, including fake pages for other sites and their use for hacking emails; using anonymous email; hacking a remote computer; scanners; email read notification and finding the email address and IP address of an email sender; checking Yahoo and MSN block and delete status; opening a webcam without the person's permission; using an anonymous proxy; finding information about a remote system; credit card hacking; scanning a website with vulnerability security scanners and obtaining suitable exploits; and some examples of attacking websites using these exploits.
Keywords: Certified ethical hacking, Trojans, Fake pages, Scanners, Hacking web sites.
Mohammed A. Al-Manie, Mohammed I. Alkanhal and Mansour M. Al-Ghamadi
In this paper, a special algorithm is developed to segment Arabic speech based on the energy level of the uttered words or sentences. In Arabic, phonemes can be divided into two energy regions: unvoiced phonemes are categorized as low energy, for example the sounds / س / (/s/) and / ه / (/h/); on the other hand, vowels and semi-vowels, such as / َ / (فتحه) (/'a/) and / و / (/w/), are categorized as high energy. Also included in this category are voiced fricatives, for instance the sounds / ز / (/z/) and / ع / (/aa/). The Arabic Phonetic Database (KAPD), developed by the Computer and Electronics Research Institute at KACST, is used to perform the automatic segmentation of speech and to test the developed algorithm.
Keywords: Automatic speech segmentation, voiced phonemes, voiceless phonemes, low energy detection, high energy detection.
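The division into high- and low-energy regions can be made concrete with short-time energy, the quantity such segmenters typically threshold frame by frame. The Python sketch below is illustrative only; the frame length, hop and threshold are assumptions, not KAPD settings.

```python
import math

def short_time_energy(samples, frame_len, hop):
    """Frame-wise energy E = sum(x^2) over sliding windows of the signal."""
    energies = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        energies.append(sum(x * x for x in frame))
    return energies

def label_frames(energies, threshold):
    """Mark each frame high-energy ('H', e.g. vowels and semi-vowels)
    or low-energy ('L', e.g. unvoiced phonemes such as /s/ and /h/)."""
    return ["H" if e >= threshold else "L" for e in energies]
```

Runs of identical labels then delimit candidate segment boundaries; a real segmenter would add smoothing and minimum-duration rules.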
Ali Hamza A, Al-Helali Adnan H. M. and Al-Nsour Ayman. J.
Elliptic curves have been used for factorizing numbers and for primality proofs, as well as in cryptography. They may also be used to create digital signatures and to construct key-exchange protocols. A comprehensive end-to-end key agreement and authentication protocol based on an Elliptic Curve Cryptosystem (ECC) may be adopted to satisfy the security requirements of a virtual class application. This paper adapts an ECC protocol to provide enhanced virtual class security in terms of computational efficiency, bandwidth and storage. It provides student identity recognition, confidentiality, non-repudiation and new keys for each session. This application is anticipated to be beneficial for most available appliances, such as PCs, laptops, mobile phones and smart phones using XML-based wireless services, personal digital assistants (PDAs), and secure web connections over wired networks.
KEYWORDS: Elliptic curves, Cryptography, Authentication and digital signature, Mobile applications and Virtual class.
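The elliptic-curve operations underlying such a key-agreement protocol reduce to point addition and scalar multiplication over a finite field. The Python sketch below uses a deliberately tiny, insecure curve chosen purely for illustration; real ECC uses standardized curves with ~256-bit fields.

```python
# Toy elliptic-curve arithmetic over GF(p) for y^2 = x^3 + ax + b.
# The parameters below are illustrative and far too small for real security.
P, A, B = 97, 2, 3
O = None  # point at infinity (group identity)

def ec_add(p1, p2):
    """Group law: add two points on the curve."""
    if p1 is O:
        return p2
    if p2 is O:
        return p1
    x1, y1 = p1
    x2, y2 = p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return O                                   # inverse points
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (lam * lam - x1 - x2) % P
    y3 = (lam * (x1 - x3) - y1) % P
    return (x3, y3)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication, the core of ECC key agreement."""
    result, addend = O, pt
    while k:
        if k & 1:
            result = ec_add(result, addend)
        addend = ec_add(addend, addend)
        k >>= 1
    return result
```

In a Diffie-Hellman-style session-key agreement, both parties compute the same point: a*(b*G) = b*(a*G).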
SAFAA SAHIB OMRAN, RAAED FALEH HASSAN, and SIRAJ QAYS MAHDI
A Multi-Agent System (MAS) is a method of providing clinical decision support from specialist doctors in a central hospital to health care practitioners in remote areas, where the number of available specialist doctors is inadequate. The MAS can be used to help patients at home who need a long-term treatment plan, by connecting them with specialist doctors in the hospital through a network. Therefore, software called Bee-gent, a multi-agent system, is designed and implemented to provide a realistic solution with two kinds of functions. First, it assists the patient in managing continuing ambulatory conditions, for example chronic problems such as diabetes, special normal conditions such as prenatal care, and wellness issues such as diet, exercise, and stress. Second, it provides health-related information by allowing the patient or health care practitioner to interact online with specialist doctors in the hospital. In this paper, Bee-gent is designed and implemented using the Visual C++ language, connecting the client online with the server and updating the information between them online after saving it in individual databases.
Keywords: mobile agent, database interface, networks, visual c++ language, health care.
Suleiman H. Mustafa and Loai F. Al-Zoua'bi
The aim of this research study was to evaluate the websites of Jordan's universities from a usability perspective. Two online automated tools, namely HTML Toolbox and Web Page Analyzer, were used along with a questionnaire directed towards users of these websites. The tools were used to measure internal attributes of the websites that cannot be perceived by users, such as HTML code errors, download time, and HTML page size. The questionnaire was developed and designed around 23 usability criteria divided into 5 categories, each dealing with one usability aspect. The results showed that the overall usability level of the studied websites is acceptable. However, there are some weaknesses in aspects of the design, interface, and performance. Suggestions are provided in the study to enhance the usability of these websites.
Keywords: Website Usability, Website Evaluation, University Websites, Jordanian Universities.
Ali Retha Hasoon Al-Moseaway
We are living in a digital world in which extensive information exchange can be performed quickly, easily and cost-effectively in digital format over the Internet or via storage devices. Digital data, such as music, images, video, text and e-mail, are easily copied and transferred without any degradation. Concerns over ownership protection, data protection, and other security issues have therefore arisen. Digital watermarking is a general solution that can be used for identifying illegal copying and ownership, for authentication, or for other applications, by inserting information into the digital data in an imperceptible way. For a watermark to be useful, it must be perceptually invisible and robust against signal processing and a variety of possible attacks by those who seek to pirate the material. In this paper we embed a watermark (text or icon) in MPEG-1 files, using the unused space in those files, to protect them from illegal copying; the same approach could be used for secret communication (steganography). The embedded information is hidden, and the effect on the appearance and function of the work is minimized or invisible. It is expected to survive even digital-to-analog conversion, compression, or resizing.
Keywords: Copyright, Watermarking, MPEG1.
Nidhal K. Al abbadi, Nizar Saadi Dahir and Zaid Abd Alkareem
Skin recognition is used in many applications, ranging from face detection and hand gesture analysis to objectionable image filtering. In this work a skin recognition system was developed and tested. While many skin segmentation algorithms rely on skin color alone, our work relies on both skin color and texture features (features derived from the GLCM) to give better and more efficient recognition of skin textures. We used feed-forward neural networks to classify input texture images as skin or non-skin textures. The system gave very encouraging results during the neural network generalization phase.
Keywords: skin recognition, texture analysis, neural networks.
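The GLCM-derived texture features mentioned above can be sketched directly: build a co-occurrence matrix for one pixel displacement, then reduce it to scalar features. The Python code below computes two common features, contrast and energy; the displacement and grey-level count are illustrative choices, not the paper's configuration.

```python
def glcm(img, dx, dy, levels):
    """Normalized Grey-Level Co-occurrence Matrix for displacement (dx, dy).
    img is a 2-D list of integer grey levels in [0, levels)."""
    mat = [[0.0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    count = 0
    for i in range(h):
        for j in range(w):
            i2, j2 = i + dy, j + dx
            if 0 <= i2 < h and 0 <= j2 < w:
                mat[img[i][j]][img[i2][j2]] += 1
                count += 1
    return [[v / count for v in row] for row in mat]

def glcm_features(mat):
    """Contrast and energy, two texture features commonly derived from a GLCM."""
    n = len(mat)
    contrast = sum(mat[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    energy = sum(v * v for row in mat for v in row)
    return {"contrast": contrast, "energy": energy}
```

A flat region yields zero contrast and maximal energy, while a checkerboard maximizes contrast; such feature vectors are what the neural network classifies as skin or non-skin.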
Marwa Obayya and Fatma Abou-Chadi
In this paper, an attempt was made to utilize the Hilbert-Huang transform (HHT) for the analysis of heart rate variability (HRV) signals in order to discriminate between normal subjects and patients with low heart rate, such as those suffering from congestive heart failure (CHF) and myocardial infarction (MI). This decomposition method is adaptive and therefore highly efficient. Using the empirical mode decomposition (EMD) method, HRV signals are decomposed into a finite and often small number of intrinsic mode functions (IMFs) that admit well-behaved Hilbert transforms. The final presentation of the results is an energy-frequency-time distribution, known as the Hilbert spectrum. The features were then statistically analysed using an analysis of variance (ANOVA) test. It has been shown that the HHT may prove to be a vital technique for the analysis of heart rate variability signals.
Keywords: Empirical mode decomposition (EMD), intrinsic mode functions (IMFs), analysis of variance (ANOVA), and Hilbert spectrum.
Rafik DJEMILI, Hocine BOUROUBA, and M.C.AMARA KORBA
ABSTRACT: This paper proposes a novel approach that combines statistical models and support vector machines. A hybrid scheme which appropriately incorporates the advantages of both the generative and discriminative model paradigms is described and evaluated. Support vector machines (SVMs) are trained to divide the whole speaker space into small subsets of speakers within a hierarchical tree structure. During testing, a speech token is assigned to its corresponding group and evaluation using Gaussian mixture models (GMMs) then proceeds. Experimental results show that the proposed method can significantly improve the performance of the text-independent speaker identification task. We report improvements of up to a 50% reduction in identification error rate compared to the baseline statistical model.
KEYWORDS: Speaker Identification- Gaussian Mixture Model (GMM)- Support Vector Machine (SVM)- Hybrid GMM/SVM.
Albermany, Salah A. and Ali, Hamza A.
Databases have inherently vast capabilities, ranging from safe and easy logical storage to up-to-date data access. This paper explores the means and concepts of Role-Based Access Control (RBAC) and decision-permission concepts. It focuses mainly on Role-Based Access Control and its application to database objects. After presenting a short overview of RBAC, it is implemented on database objects by allowing authorized users to access the database information and preventing unauthorized users from doing so; this capability is becoming a valuable asset within all organizations. In addition to the inclusion of decision permission as a new parameter, the concept of authorization rooms is proposed; it has proved to enhance system flexibility and to transform the system from a static to a dynamic status. Another important objective of this work is to enhance data confidentiality by extracting maximum business value from Role-Based Access Control. This is achieved using a security protection mechanism that provides secure access to data stored in the databases, and prevents suspicious activities and fraudulent actions by internal employees intent on revealing highly sensitive information (information theft). The protection is achieved through effective control over the database objects.
KEYWORDS: Role-Based Access Control, Database security, Data mining, authentication.
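The core RBAC check, allowing an action on a database object only when some role assigned to the user grants it, fits in a few lines. The roles, permissions and users below are invented for illustration; the paper's authorization-room and decision-permission extensions are not modeled here.

```python
# A minimal sketch of Role-Based Access Control over database objects.
# All role names, users and permissions below are illustrative.

ROLE_PERMS = {
    "clerk":   {("accounts", "select")},
    "auditor": {("accounts", "select"), ("logs", "select")},
    "dba":     {("accounts", "select"), ("accounts", "update"),
                ("logs", "select"), ("logs", "delete")},
}

USER_ROLES = {"alice": {"clerk"}, "bob": {"auditor", "clerk"}}

def is_authorized(user, obj, action):
    """A user may act on a database object iff some assigned role grants it.
    Unknown users or roles simply have no permissions."""
    return any((obj, action) in ROLE_PERMS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

Because permissions attach to roles rather than to users, revoking or reassigning a role updates every affected user's rights at once, which is the flexibility RBAC is valued for.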
Dr. Hana'a M. Salman
Biometric recognition systems are an efficient method with broad applications in security access control, personal identification, and human-computer communication. On the other hand, some biometrics show only little variation over the population, have large intra-subject variability over time, or are not present in the whole population. To fill these gaps, the use of multimodal biometrics is a first-choice solution. This paper describes a multibiometric method for human recognition based on new feature vectors, identified as the spectrum eigenface and the spectrum eigenpalm. The proposed combination scheme exploits the parallel-mode capabilities of fusing the feature vectors at the matching level, and invokes certain normalization techniques that increase its robustness to variations in geometry and illumination for the face and palmprint. The correlation distance is used as a similarity measure, and a threshold value is used to prevent an impostor from being recognized. Experimental results demonstrate the effectiveness of the new method compared to the unimodal biometrics of the spectrum eigenface and eigenpalm alone.
Keywords: Fusion, Multibiometrics, Recognition, Eigenvector, Spectrum, Correlation.
Dr. Murad Ahmed Ali Taher Al-Absi
The appearance, development, and widespread adoption of the computer in all fields has been accompanied by the need to use computer techniques and capabilities in communication, to facilitate the connection process and to simplify communication device circuits. It also adds features to a computer-based system that would not be possible without computers, digital techniques, and interfacing. For example, if you want to use a powerful processor such as the Z80, or a microcontroller such as the 8051 or 8048, to drive your system, you will face many obstacles, such as connecting RAM, ROM and I/O peripherals to the microprocessor or microcontroller; such problems largely disappear with a computer. We can say that a computer-based system can inherit all the powerful features of the computer (storage, recording, accounting, telecommunications, etc.). In this paper we show a newly designed system for controlling and accounting phone calls, using a newly designed interfacing circuit connected to a PC via the LPT port, and its advantages over an expensive commercial one.
Keywords: Interface, DTMF, Monitoring, Accounting, Controlling.
Houria Triki and Thiab R. Taha
A higher-order nonlinear Schrödinger equation (NLSE) with third- and fourth-order dispersion, cubic-quintic nonlinearities, self-steepening, and self-frequency shift effects is considered. This model describes the propagation of femtosecond light pulses in optical fibers. It is to be noted that soliton pulses are used as the information carriers (elementary "bits") to transmit digital signals over long distances. The complex envelope function ansatz of Li et al. is adopted to investigate general analytic solitary wave solutions and derive explicit bright and dark solitons for the considered model. Importantly, new analytical dark and bright wave solutions expressed in terms of the model coefficients are found. These exact solutions are useful for understanding the mechanism of the complicated nonlinear physical phenomena related to wave propagation in a higher-order nonlinear and dispersive Schrödinger system.
Keywords: nonlinear Schrödinger equation, solitary wave solution, complex amplitude ansatz method.
Amine Moussa, Hoda Maalouf
This paper presents a performance-optimization technique in a structured wireless sensor network that has a circular form and consists of several rings with many clusters. Optimization is done by varying the number of rings and by applying aggregation to the data transferred across the network up to the destination node. The main motivation for this study is that energy-limited sensor nodes may need to operate in time-constrained environments, in which both energy and time must be conserved. In this paper, we analyze the impact of ring density and data aggregation on energy consumption and transfer time. We also find an optimum structure, consisting of a certain number of rings, in which the sensor network operates with low energy consumption and short transfer time.
Keywords: Wireless sensor network; Performance optimization; Circular network model; Data aggregation; Energy consumption.
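A first-order counting model makes the effect of aggregation concrete: without it, each ring must relay all traffic generated further out; with per-ring fusion, the relay load per ring stays constant. The model below is an illustrative simplification, not the paper's energy model.

```python
def total_transmissions(n_rings, nodes_per_ring, aggregate):
    """Count packet transmissions when every node reports one reading to a
    central sink, relayed ring by ring inward (ring 1 is the innermost).

    Without aggregation, ring k's nodes forward everything generated in
    rings k..n plus their own data. With aggregation, each relay node fuses
    its inbound traffic with its own reading into a single packet.
    """
    total = 0
    for k in range(1, n_rings + 1):
        if aggregate:
            total += nodes_per_ring                       # one fused packet per node
        else:
            total += (n_rings - k + 1) * nodes_per_ring   # own + all outer traffic
    return total
```

Since radio transmission dominates sensor-node energy budgets, the transmission count is a reasonable proxy for energy, and the gap between the two modes grows quadratically with the number of rings.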
ABDUELBASET M. GOWEDER, IBRAHIM A. ALMERHAG and ANES A. ENNAKOA
The Arabic language presents significant challenges to many natural language processing applications. The broken plurals (BP) problem is one of these challenges, especially for information retrieval applications. It is difficult to deal with Arabic broken plurals and reduce them to their associated singulars, because no obvious rules exist, and there are no standard stemming algorithms that can process them. This paper attempts to handle the broken plural problem by developing a method that identifies broken plurals in unvowelised Arabic text and reduces them to their correct singular forms, incorporating a simple broken-plural matching approach with a machine translation system and an English stemmer as a new approach. A set of experiments has been conducted to evaluate the performance of the proposed method using a number of text samples extracted from a large Arabic corpus (the Al-Hayat newspaper). The obtained results are analyzed and discussed.
Keywords: Information Retrieval, Machine Translation, Stemming, Arabic Broken Plural, Arabic Morphology.
Eng. RANIA SALAH EL-SAYED, Prof. MOUSTAFA ABD EL-AZIEM and Dr. MOHAMMAD ALI GOMAA
Digital signatures are used in message transmission to verify the identity of the sender and ensure that a message has not been modified after signing. The RSA algorithm is extensively used in popular implementations of Public Key Infrastructures. In this paper, we have developed a new algorithm for generating signatures that overcomes the shortcomings of the RSA system (long processing time and computational overhead). In addition, the new algorithm achieves high security for digital signatures. The performance of the two public key cryptosystems (RSA and DSS) and the new algorithm has been implemented and compared. The results obtained show that signing and verification operations are faster with the new algorithm than with RSA and DSS. It can also prevent anyone from reaching the sender's message, because with the new algorithm an intruder cannot forge the message sent, since the sender's private key is unknown to him. Accordingly, the sender cannot be impersonated. On the receiver's side, the message is verified using the sender's public key, and his private key is used to decrypt the message successfully.
Key words: Cryptography, RSA, DSS, PKC and DSA.
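For context, the RSA signing and verification operations that the new algorithm is compared against reduce to modular exponentiation with the private and public exponents. The toy parameters below (8-bit primes, a bare integer standing in for a padded message hash) are for illustration only and are nowhere near secure.

```python
# Textbook RSA signing, shown only as the comparison baseline.
# Real deployments use >=2048-bit moduli, hashing, and padding (e.g. PSS).

def make_rsa_keys(p, q, e=17):
    """Build a toy RSA key pair from two (small) primes."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)          # private exponent: e^-1 mod phi(n)
    return (n, e), (n, d)        # (public key, private key)

def sign(msg_hash, priv):
    n, d = priv
    return pow(msg_hash, d, n)   # signature = hash^d mod n

def verify(msg_hash, sig, pub):
    n, e = pub
    return pow(sig, e, n) == msg_hash % n
```

Verification succeeds only for the signed hash, so any modification after signing is detected, which is exactly the property both RSA and the proposed scheme must provide.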
Mohammed S. Khalil, Dzulkifli Muhammad and M.Masroor Ahmed
Biometric security systems are widely used to ensure a maximum level of safety. In a biometric system, the data is neither uniformly distributed nor reproduced precisely each time it is processed. As a result, this data cannot be used directly as a password or as a cryptographic secret. This paper proposes a novel method that extracts fingerprint minutiae features and converts them to a public key using a fuzzy extractor. The public key can be used as a key in a cryptographic application.
Keywords: Biometric, Fingerprint, Local feature, Fuzzy extractor, and Cryptography.
Fadila Maouche and Mohamed Benmohamed
The technology of automatic speech recognition is growing rapidly; a multitude of algorithms have been developed to improve the performance and robustness of ASR (Automatic Speech Recognition) systems. Among the most studied methods in recent years are those inspired by nature, such as genetic algorithms. In this article we introduce a system that uses a genetic algorithm and Mel-frequency cepstral coefficients (MFCC) for automatic recognition of isolated Arabic words.
Keywords: Automatic speech recognition, genetic algorithm, Arabic language, Mel frequency cepstral coefficients (MFCC), corpora.
Chergui Leila and Benmohammed Mohammed
This paper presents an Arabic off-line handwriting recognition system based on a new classifier: the Fuzzy ART network, a type of neural network. The proposed system employs Tchebichef geometric moments as features, which is novel in the domain of Arabic recognition systems. Tchebichef moments provide better feature representation capability and improved robustness to image noise than other types of moments. Our system, which is based on a holistic method, includes four steps: preprocessing, comprising thinning, normalization, and slant detection and correction; feature extraction, using Tchebichef moments; training, performed with the Fuzzy ART training algorithm; and classification through the Fuzzy ART classifier. The proposed system has been tested on the last set of the IFN/ENIT database, achieving a nearly 78% recognition rate.
Keywords: Handwriting, Arabic recognition, Fuzzy ART network, Thinning algorithms, Tchebichef moments.
Kef Maâmar and Benmohammed Mohammed
The study and implementation of routing algorithms to ensure the connectivity of ad hoc networks is a hard task. The environment is dynamic, and the topology of the network may change frequently. Two distinct approaches address this problem: the first is based on a flat view of the network, and the second on an auto-organization structure.
This paper studies an auto-organization solution for ad hoc networks and its use to improve routing protocols, through the reduction and localization of data control traffic.
Keywords: Auto-organization, Group, Tree-cover, Ad hoc networks, Routing protocol.
Khadoudja Ghanem, Alice Caplier and Mohamed-Khireddine Kholladi
This paper presents a new method to estimate the intensity of a human facial expression. Supposing an expression occurring on a face has been recognized as one of the six universal emotions (joy, disgust, surprise, sadness, anger, fear), the estimation of the expression's intensity is based on determining the degree of geometrical deformation of some facial features and on the analysis of several distances computed on skeletons of expressions. These skeletons are the result of a contour segmentation of permanent facial features (eyes, brows, mouth). The proposed method uses belief theory for data fusion. The intensity of the recognized expression is scored on a three-point ordinal scale: "low intensity", "medium intensity" or "high intensity". Experiments on a large number of images validate our method and give good estimates of facial expression intensity. We have implemented and tested the method on joy, surprise and disgust, and are now applying the same method to the remaining expressions: anger, fear and sadness.
Keywords: Facial Expression, intensity estimation, belief theory.
DR. JAFLAH AL-AMMARI AND MS. SHARIFA HAMAD
With the aid of the Internet, many organizations and institutes have adopted e-learning, which is considered one of the most important services provided by the Internet. The University of Bahrain is one of the Arab educational institutes on its way to adopting an e-learning system and converting all of its courses to be online by 2008. The University of Bahrain believes that adopting an e-learning system can help address the many challenges arising from the growing number of students, locally and regionally, compared to the available human, technical, and other resources. The purpose of this paper is to investigate the factors affecting the acceptance and use of the e-learning system at the University of Bahrain. Through an extension of the Technology Acceptance Model (TAM), three factors that influence the intention to adopt the e-learning system are examined: computer self-efficacy, content quality, and subjective norms. In addition, some cultural factors that could affect students' attitudes toward using the e-learning system are also examined. A quantitative research methodology is used, based on a survey distributed to a sample of 200 students from the University of Bahrain to form the primary data of the research. Regression analysis was used to observe the associations among the proposed constructs of the research model. The findings of this research can help the University of Bahrain, and any other Arab educational institute that intends to apply an e-learning system, to identify the factors that influence its adoption. Hence, they should be able to recognize different ways to enhance their students' acceptance of the e-learning system.
Key words: Technology Acceptance Model; Cultural Factors; E-learning.
Amel Serrat, Mohamed Benyettou and Samira Chouraqui
The robot localization problem is a key problem in making truly autonomous robots. If a robot does not know where it is, it can be difficult to determine what to do next. In order to localize itself, a robot has access to relative and absolute measurements, giving the robot feedback about its driving actions and about the state of the environment around it. Given this information, the robot has to determine its location as accurately as possible. What makes this difficult is the uncertainty in both the driving and the sensing of the robot; the uncertain information needs to be combined in an optimal way. The Kalman filter is a technique from estimation theory that combines the information of different uncertain sources to obtain the values of the variables of interest, together with their uncertainty. In this work we provide a thorough discussion of the robot localization problem solved by the Kalman filter and by an Adaptive Time-Delay Neural Network. The results show promise for using neural networks as an alternative to the Extended Kalman Filter.
Keywords: Extended Kalman Filter, Adaptive Time Delay Neural Network, Robot, Localization.
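A one-dimensional Kalman filter makes the predict/update cycle used for localization concrete: odometry drives the prediction and inflates the uncertainty, while an absolute measurement corrects the estimate, weighted by the Kalman gain. The sketch below is the textbook scalar case, not the paper's implementation.

```python
def kalman_step(x, p, u, z, q, r):
    """One predict/update cycle of a 1-D Kalman filter for robot position.

    x, p : current position estimate and its variance
    u    : odometry displacement (relative measurement / drive action)
    z    : absolute position measurement (e.g. a beacon reading)
    q, r : process and measurement noise variances
    """
    # Predict: apply the drive action; uncertainty grows by the process noise.
    x_pred = x + u
    p_pred = p + q
    # Update: blend in the measurement, weighted by the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new
```

Repeating the cycle with consistent measurements drives the variance down and the estimate toward the true position, which is exactly the fusion of uncertain sources the abstract describes.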
Serrat A., Benyettou M., Benchennane I.
The kinematic problem of serial manipulators comprises the study of the relations between joint variables and Cartesian variables. We distinguish two problems, commonly referred to as the direct and inverse kinematic problems. The former reduces to matrix multiplications and poses no major problem. The inverse kinematic problem, however, is more challenging, for it involves intensive variable elimination and nonlinear-equation solving. In this work, we have used an Artificial Immune System to solve the inverse problem for a manipulator arm, determining its various joint articulations. Simulation results are presented to show the validity of the suggested approach.
Keywords: Artificial Immune System, Inverse Kinematic Problem, manipulator.
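For the simplest case, a planar two-link arm, the inverse kinematic problem has a closed-form solution, which makes a useful correctness check for any search-based solver. The Python sketch below is illustrative and unrelated to the paper's manipulator or its Artificial Immune System.

```python
import math

def ik_two_link(x, y, l1, l2):
    """Closed-form inverse kinematics of a planar 2-link arm (elbow-down).
    Returns joint angles (theta1, theta2) placing the end effector at (x, y)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)   # law of cosines
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def fk_two_link(theta1, theta2, l1, l2):
    """Direct (forward) kinematics, used here to check the inverse solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

Round-tripping through the forward model (IK then FK) is the standard way to validate a candidate joint configuration, whether it came from algebra or from an evolutionary search.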
Fériel Ben Fraj, Chiraz Ben Othmane Zribi and Mohamad Ben Ahmed
In order to construct a generic grammatical resource for the Arabic language, we have chosen to develop an Arabic grammar based on the TAG formalism. Our choice is justified, in particular, by the complementarities we have noticed between Arabic syntax and this grammatical formalism. This paper consists of two comparative studies: the first between a set of unification grammars, and the second between the characteristics of the Arabic language and those of the TAG formalism. These comparisons lead us to defend our choice and to describe the structure and characteristics of ArabTAG, a TAG grammatical model for representing modern Arabic syntactic structures, constructed using the XML language. ArabTAG will be useful for Arabic processing applications, especially the parsing task.
Keywords: Arabic language, unification grammars, TAG formalism, elementary trees, NLP.
A. H. BENYAMINA and H. BENHAMOUDA
The quality of process-state measurement depends on the instrumentation and constitutes a main design stage. In this paper, we propose to solve the problem of sensor placement in a complex process: under a minimal observability constraint, we optimize performance criteria. The method is proposed for sensor placement with minimal instrumentation cost. The study is based on a linear process model, where a genetic algorithm is used to search over sensor types and locations. Simulation results show the good performance of our algorithm.
Key words: Sensor placement, observability, optimization, genetic algorithm.
BOUKHARI Wassila and BENYETTOU Mohamed
Palmprint-based personal identification, considered a new member of the biometrics family, has become a very active field of research in recent years. Work until now has been based on palmprint image representation techniques for better classification. In our work, we focus on classification using machine learning methods, notably a particular type of spiking neural network, the Liquid State Machine (LSM). The recognition rates obtained with this method across different approaches were good (the highest rate being 98.17%). In addition, the experimental results demonstrate the good performance of our identification system in terms of speed (less than one second), which allows us to say that this system is well suited to real-time applications.
Keywords: biometrics, identification, palmprint, spiking neural networks, Liquid State Machine (LSM).
KHOUTIR BOUCHBOUT and ZAHIA ALIMAZIGHI
The importance of inter-organizational systems (IOS) has been increasingly recognized by organizations. However, IOS adoption has proved to be difficult and, at this stage, the reasons why are not fully uncovered. In practice, benefits have often remained concentrated, primarily accruing to the dominant party, resulting in low rates of adoption and usage, and often culminating in the failure of the IOS. The main research question is why organizations initiate or join an IOS and what factors influence their adoption and use levels. This paper reviews the literature on IOS adoption and proposes a theoretical framework to identify the critical factors and capture a complete picture of IOS adoption. Our findings suggest that five groups of factors significantly affect the adoption and use of IOS in the Supply Chain Management (SCM) context: 1) inter-organizational context, 2) organizational context, 3) technological context, 4) perceived costs, and 5) perceived benefits.
Keywords: Interorganizational Information Systems, IOS adoption and use, critical factors, B-to-B relationships.
Fadoua BOUAFIF SAMOUD, Samia SNOUSSI MADDOURI, Haikal EL ABED and Noureddine ELLOUZE
This paper presents automatic extraction of the handwritten Arabic components of complex documents. Two methods are developed for this extraction: the first is based on Mathematical Morphology (MM); the second on the Hough Transform (HT). The developed methods are evaluated on the CENPARMI Arabic Checks Database, extracting the handwritten components present in a check: the numerical amount, the literal amount, and the date zone. We present a concept for automatic evaluation of the results, based on labeling tools for the different parts of the documents used. We achieve a correct classification rate of 98% for the numerical amount, 96% for the literal amount, and 98% for the date, extracted by the Hough Transform method.
Keywords: Document processing, Extraction methods, Mathematical Morphology, Hough Transform, CENPARMI Database.
Intisar A. Majied Al-Said, Nedhal Al-Saiyd and Firas Turki Attia
This paper presents the development of a genetic algorithm approach to schedule tasks on a multiprocessor system. The objective is to minimize the make-span, i.e., the completion time of all tasks, while maintaining the precedence constraints within the task graph. No inter-processor communication overheads are assumed. An array data structure is employed for string representation and a hybrid selection method for reproduction is adopted. The ability of the genetic-based scheduler to deal with resource failures and aperiodic operations is also explored.
Keywords: task scheduler, genetic algorithms, multiprocessors, parallel computing.
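The core loop of such a genetic scheduler can be illustrated with a minimal sketch. Here a chromosome is an array assigning each task to a processor (the array "string representation" the abstract mentions), fitness is the make-span, and selection is simple truncation with elitism; the paper's precedence constraints and hybrid selection method are omitted for brevity, and the task durations are invented:

```python
import random

def makespan(assignment, durations, n_procs):
    # Finish time of the busiest processor = schedule make-span.
    load = [0.0] * n_procs
    for task, proc in enumerate(assignment):
        load[proc] += durations[task]
    return max(load)

def ga_schedule(durations, n_procs, pop_size=40, generations=200, seed=0):
    rng = random.Random(seed)
    n = len(durations)
    # Chromosome: array mapping each task index to a processor.
    pop = [[rng.randrange(n_procs) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: makespan(c, durations, n_procs))
        survivors = pop[:pop_size // 2]          # truncation selection with elitism
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)            # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:               # mutation: reassign one task
                child[rng.randrange(n)] = rng.randrange(n_procs)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda c: makespan(c, durations, n_procs))

durations = [4, 2, 7, 3, 5, 1, 6, 2]   # toy task graph with no precedence edges
best = ga_schedule(durations, n_procs=3)
print(makespan(best, durations, 3))
```

For these durations the lower bound is 10 (total work 30 over 3 processors), so the printed make-span shows how close the search gets to optimal.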
Suhail M. Odeh, Eduardo Ros and Ignacio Rojas
This paper presents a computer-aided diagnosis system for skin lesions. Diverse parameters, or features, extracted from fluorescence images are evaluated for cancer diagnosis. The selection of parameters has a significant effect on the cost and accuracy of an automated classifier. A genetic algorithm (GA) performs parameter selection using a K-nearest neighbours (KNN) classifier. We evaluate the classification performance of each subset of parameters selected by the genetic algorithm. This classification approach is modular and enables easy inclusion and exclusion of parameters, which facilitates the evaluation of their significance for skin cancer diagnosis. We have implemented this parameter evaluation scheme adopting a strategy that automatically optimizes the K-nearest neighbours classifier and indicates which features are most relevant for the diagnosis problem.
Keywords: Genetic algorithm, K-nearest neighbours, skin lesions, fluorescence images.
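The wrapper idea (a GA selecting a feature subset that a KNN classifier then scores) can be sketched on toy data. The chromosome is a bitmask over four synthetic features, scored by leave-one-out 1-NN accuracy; the dataset, fitness details and GA parameters below are invented, not the paper's:

```python
import random

def knn_accuracy(X, y, mask):
    # Leave-one-out 1-nearest-neighbour accuracy using only selected features.
    feats = [i for i, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    correct = 0
    for i in range(len(X)):
        dists = [(sum((X[i][f] - X[j][f]) ** 2 for f in feats), y[j])
                 for j in range(len(X)) if j != i]
        correct += min(dists)[1] == y[i]
    return correct / len(X)

def ga_select(X, y, n_feats, pop_size=20, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_feats)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: -knn_accuracy(X, y, m))
        pop = pop[:pop_size // 2]                # keep the fittest masks
        while len(pop) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)
            child = [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]
            if rng.random() < 0.3:               # bit-flip mutation
                k = rng.randrange(n_feats)
                child[k] ^= 1
            pop.append(child)
    return max(pop, key=lambda m: knn_accuracy(X, y, m))

# Synthetic data: the class is decided by feature 0 alone; features 1-3 are noise.
rng = random.Random(0)
X = [[rng.uniform(0, 1) for _ in range(4)] for _ in range(40)]
y = [1 if row[0] > 0.5 else 0 for row in X]
mask = ga_select(X, y, 4)
print(mask)
```

Because only feature 0 carries class information, the evolved mask keeps it, while the noise features tend to be dropped: exactly the relevance indication the abstract describes.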
This contribution presents our attempt at developing a robust large-scale parser system (AraParse) for analysing written Arabic texts. From a practical point of view, AraParse is able to analyse unvowelled sentences thanks to the wide coverage of its linguistic resources. The AraParse system uses a large lexicon generated from the DIINAR.1 knowledge database and an AGFL (Affix Grammar over Finite Lattices) of Arabic syntactic structures. Robust parsing is one of the next steps once a parser based on broad-coverage resources has been developed. The parser is designed for robustness against noisy and ill-formed data. First we describe the architecture and the components of the AraParse system. Then the focus turns to the strategies and solutions used in an attempt to implement a robust parser. We explain how the system can be extended with features suitable for robust parsing.
Keywords: Arabic language processing, Morphological analysis, Lexicon, Formal grammar, Robust parsing.
Adel ALTI, Mahieddine DJOUDI and Adel SMEDA
A key aspect of the design of any software system is its architecture. An architecture description provides a formal model of the architecture in terms of components and connectors and how they are composed together. COSA (Component-Object based Software Architecture) is a software architecture model based on object-oriented modeling and component-based modeling. This model improves reusability by increasing the extensibility, evolvability, and compositionality of software systems. This paper presents the COSA modelling tool, which offers architects the possibility to verify the structural coherence of a given system and to validate its semantics within the COSA approach.
Keywords: Software Architecture, COSA, Architecture Description Languages, UML 2.0 Modeling Language, Component, Connector.
Dr. Yhya R. Kuraz
A proposed multi-wavelet network is used in identification problems of nonlinear systems. The multi-wavelet network is constructed as an alternative to a neural network to approximate a nonlinear system. This multi-wavelet network approximation is suitable for multi-input multi-output nonlinear uncertain (or unknown) functions, such as a robot manipulator.
Keywords: wavelet network, neural network, multi-wavelet, multi-wavelet network, identification, multi-input multi-output systems.
Alti Adel, Smeda Adel and Djoudi Mahieddine
Software architecture specification captures system structure, by identifying architectural components and connectors, and required system behavior, by specifying how components and connectors are intended to interact. COSA (Component-Object based Software Architecture) is a software architecture model that provides important capabilities for describing the structural aspects of software systems; however, it lacks support for behavioral aspects. In this paper, we define the behavioral aspects of software architecture, taking COSA as an example. We also provide a UML 2.0 profile for the proposed aspects. This profile includes a set of stereotypes with OCL constraints. We transform these constraints into B specifications as invariants of the B models. This helps in the design of correct software architectures and in the verification of the behavior of architectural elements by analyzing the derived B specifications using the B-support tools.
Keywords: Software Architecture, COSA+, Behavioral aspects, state machines, UML-B.
Vickneswaran Jeyabalan, Andrews Samraj and Loo Chu Kiong
The rapid advancement of Brain Machine Interface (BMI), or Brain Computer Interface (BCI), research over recent years has concentrated on the development of new technologies that adopt the simplest possible procedures, since the expected beneficiaries are disabled. Most locked-in patients possess strong mental ability in terms of imagining and thinking, but are unable to express their views physically. Hence a BMI which does not rely on physical movements or other muscle activity is definitely an added advantage in this arena. The objective of this paper is to identify and classify motor imagery signals extracted from the left and right cortex of the human brain. The signals are captured using the Electro-Encephalogram (EEG) from the C3, C4, and Cz channels of the scalp electrodes and are preprocessed to expose the motor imagery signals. The features are extracted by implementing an adaptive band-pass filter combined with a frequency-shifting and segmentation technique. The classification results using Fuzzy ARTMAP demonstrate the effectiveness of our proposed technique. Positive results are obtained using features from only one channel. The best results were found from the features of channel C4, which is consistent with established neuroscience knowledge.
Keywords: Brain Machine Interface, motor imagery, adaptive filtering, Fuzzy ARTMAP.
Mohamed Hany, Moustafa H. Aly and M. Nassr
A comparison between the results of strain-dependent and temperature-dependent center Bragg wavelength shift has been carried out using different apodization profiles to produce linearly chirped, apodized, far-off-resonance fiber Bragg gratings for the design of an optimum dispersion compensator. The chirping is made using the ultraviolet (UV) phase mask method.
Keywords: Apodization, Off resonance, Dispersion and dispersion slope, Interferometer, Photosensitive fiber, Strain.
Alti Adel, Mahieddine Djoudi and Adel Smeda
The combination of the state exploration approach (mainly model checking) and the deductive reasoning approach (theorem proving) promises to overcome the limitations and to enhance the capabilities of each. We are interested in defining a platform for high-level model checking using Multiway Decision Graphs (MDGs) within Higher Order Logic (HOL). The platform is based on the logical formulation of an MDG as a Directed Formula (DF). The DF is defined in the HOL theorem prover, where many-sorted first-order logic is characterized as a HOL built-in data type. Then, HOL inference rules are defined to check the well-formedness conditions of any DF. Based on this formalization, the MDG operations are defined as inference rules, and a well-formedness proof of each operation is provided. Finally, we present some experimental results to show the performance of the MDG-HOL platform. The obtained results show that this platform offers a considerable gain in terms of automation without sacrificing CPU time and memory usage.
Keywords: Theorem Prover, Multiway Decision Graphs, Inference Rules, Pruning by Subsumption.
S. ANDREWS, LOO CHU KIONG and KANNAN R.
The very aim of every brain-computer interface (BCI) is to translate stimulated brain activity into a relevant computer command. This depends heavily on error-free processing methods and systematic regression or classification. The classification process is carried out to predict the categories of test data. The brain processes multiple functions simultaneously, resulting in a complex EEG (Electroencephalogram) pattern. Further, the characteristics of the P300 component are difficult to determine a priori, especially when the signals are analysed on a single-trial basis. Hence a strong classifier is essential, one able to characterise the presence of the P300 component evoked during the target stimuli in the EEG. In this paper, we used the robust classification model Fuzzy ARTMAP to craft an efficient non-invasive P300-based BCI from single-trial visual evoked potential (VEP) signals. Fuzzy ARTMAP is an ART network for the association of analog patterns in supervised mode and is capable of overcoming the stability-plasticity dilemma. In this experiment, the VEP signals extracted during individual trials from able-bodied and severely disabled subjects are classified using the feature signal pattern. The high classification accuracy obtained validates the suitability of our proposed Fuzzy ARTMAP classification for the single-trial approach.
Keywords: Brain-computer interface; P300 component; Fuzzy ARTMAP; Single trial analysis; Visual evoked potential.
Riyad Al-Shalabi, Ghassan Kanaan, Bashar Al-Sarayreh, Ali Al-Ghonmein, Hamed Talhouni and Salem Al-Azazmeh
Many Natural Language Processing (NLP) techniques have been used in Information Retrieval, but the results are not encouraging. Proper names are problematic for cross-language information retrieval (CLIR); detecting and extracting proper nouns in the Arabic language is a primary key to improving the effectiveness of such systems. The value of the information in a text is usually determined by the proper nouns of people, places, and organizations; to collect this information, the proper nouns must be detected first. Proper nouns in Arabic do not start with a capital letter as in many other languages, such as English, so special treatment is required to find them in a text. Little research has been conducted in this area; most efforts have been based on a number of heuristic rules used to find proper nouns in the text. In this research we use a new technique to retrieve proper nouns from Arabic text, using a set of keywords and particular rules to represent the words that might form a proper noun and the relationships between them. To extract proper nouns from a retrieved document, we need some information about them and where they are found. First, we mark the phrases that might include proper nouns; second, we apply rules to find the proper nouns, using simple methods (stop-word removal and stemming) that usually yield significant improvements. To test the system we have used 20 articles extracted from the Al-Raya newspaper published in Qatar and the Alrai newspaper published in Jordan.
Keywords: Proper noun, Arabic Language, Prefixes, suffixes.
Mohamed Lamine Berkane and Mahmoud Boufaida
A design pattern is a reusable solution to a commonly occurring problem in software design. If design patterns could be captured and reused in reverse engineering, this would be very helpful to those who develop and maintain software, so there have been many attempts to detect design patterns during reverse engineering. The majority of these attempts are based on object-oriented programming for representing patterns. However, several works have been interested in aspect-oriented design patterns, and they show improvements in terms of better code locality and composability. One of our concerns in this work is to improve design pattern implementation using the techniques of aspect-oriented programming. We propose a new approach to reverse engineering based on aspect-oriented implementation of design patterns. This paper also describes a tool that implements this new approach. We evaluate our approach by applying the tool to a case study. The goal of this article is to facilitate software maintenance by extracting the software business logic based on aspects.
Keywords: Reverse Engineering, Design Pattern, Aspect-Oriented Programming, Business Logic.
Yassine Benajiba, Mona Diab and Paolo Rosso
The Named Entity Recognition (NER) task has been garnering significant attention as it has been shown to help improve the performance of many Natural Language Processing (NLP) applications. More recently, we are starting to see a surge in developing
NER systems for languages other than English. With the relative abundance of resources for the Arabic language and a certain degree of maturation in the state of the art for processing Arabic, it is natural to see interest in developing NER systems for the language. In this paper, we investigate the impact of using different sets of features that are both language independent and
language specific in a discriminative machine learning framework, namely, Support Vector Machines. We explore lexical, contextual and morphological features and nine data-sets of different genres and annotations. We systematically measure the impact of the different features in isolation and combined. We achieve the highest performance using a combination of all features. Combining all the features, our system yields an F1=82.71. Essentially combining language independent features with language specific ones yields the best performance on all the genres of text we investigate.
Keywords: Arabic, Natural Language Processing, Information Extraction, Named Entity Recognition.
AMEL MELIOUH, ELHILLALI KERKOUCHE and ALLAOUA CHAOUI
The design of a supervisory controller for a distributed manufacturing process demands modular modelling and formal analysis. The objective of this paper is to present an automated design based on UML (Unified Modelling Language) modelling and Petri net verification. First, UML use case and class diagrams are used to model the manufacturing process. Then a transformation into their equivalent Petri net is carried out. The transformation is based on the combined use of meta-modelling and graph grammars supported by the ATOM3 tool. After that, INA is used for verification purposes. The work is illustrated by a water bottling line.
Keywords: Distributed Manufacturing systems, Petri nets, UML, supervisory controller, graph grammars, ATOM3.
Khalil H. Al-Shqeerat
An ad hoc network is a collection of mobile hosts that dynamically and arbitrarily form a temporary network without connecting to any fixed infrastructure or centralized controller. In such an environment, two mobile hosts that want to communicate may not be within wireless transmission range of each other, but can communicate if other hosts between them, also participating in the ad hoc network, are ready to forward packets for them. This paper presents a protocol for routing in ad hoc networks that uses dynamic source routing (DSR). We present and evaluate several techniques to increase the efficiency of DSR, namely enhanced caching and extended usage of the cache, improved error handling, load balancing, rerouting during transmission and rerouting notification. Based on results from a simulation of mobile hosts operating in an ad hoc network, the protocol performs well over a variety of environmental conditions such as host density, movement rates and traffic intensity. Simulation results show that the combination of the presented techniques not only results in a substantial improvement of algorithm efficiency but also reduces the overheads.
Keywords: mobile ad hoc networks (MANETs), mobile hosts, route discovery, routing protocol.
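Two of the cache techniques mentioned, enhanced caching with route prefixes and purging on a reported link error, can be sketched as a toy route cache. The node names, the implicit source `S`, and the class API are illustrative, not the protocol's actual data structures:

```python
class RouteCache:
    """Toy DSR-style route cache: stores full source routes, also caches
    every prefix of a learned route, and drops any cached route that
    crosses a link reported broken."""

    def __init__(self):
        self.routes = {}   # destination -> list of hops (source route from 'S')

    def store(self, dest, route):
        # Enhanced caching: keep the shorter route if one is already cached.
        if dest not in self.routes or len(route) < len(self.routes[dest]):
            self.routes[dest] = route
        # Extended usage: every prefix of the route is itself a valid route.
        for i, hop in enumerate(route[:-1]):
            prefix = route[:i + 1]
            if hop not in self.routes or len(prefix) < len(self.routes[hop]):
                self.routes[hop] = prefix

    def lookup(self, dest):
        return self.routes.get(dest)

    def link_broken(self, a, b):
        # Improved error handling: purge every route that uses the broken link.
        def uses(route):
            hops = ['S'] + route
            return any((hops[i], hops[i + 1]) in ((a, b), (b, a))
                       for i in range(len(hops) - 1))
        self.routes = {d: r for d, r in self.routes.items() if not uses(r)}

cache = RouteCache()
cache.store('D', ['A', 'B', 'D'])       # learned route S -> A -> B -> D
print(cache.lookup('B'))                # prefix route S -> A -> B comes for free
cache.link_broken('A', 'B')             # ROUTE ERROR on link A-B
print(cache.lookup('D'))
```

After the link error, every route crossing A-B is invalidated, while the still-valid route to `A` survives, mirroring how a route error should only purge affected cache entries.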
Prof. Dr. BASHIR M. KHALAF and MOHAMMED WAJID AL-NEMA
The main objective of this paper is the development of a new parallel integration algorithm for solving Boundary Value Problems (BVPs) in Ordinary Differential Equations (ODEs). This algorithm is suitable for running on MIMD computing systems. We analyze the stability and error control of the developed algorithm. The treatment of stiff boundary value problems by the developed technique has been considered; finally, we generalize the method for higher-order BVPs.
Keywords: Boundary Value Problems, Ordinary Differential Equations, Parallel Algorithms, MIMD Computers.
Mohamed Redha Bahri, Rabah Mokhtari and Allaoua Chaoui
The technology of mobile agents has recently gained importance not only because of its capacity for developing and building distributed, heterogeneous, and interoperable systems, but also because of the robust development of mobile communication networks. However, there are few works dealing with methods and tools for the analysis and design of mobile agent systems. Furthermore, mobile systems have introduced new concepts such as migration, cloning and locations. We propose in this paper an extension of the most important UML 2.0 diagrams to model mobile agent systems, with the objective of addressing these three concepts.
Keywords: Mobile Agents, Location, Migration, Cloning, UML, Modelling.
Djamal BENNOUAR, Abberrezak Henni and Abdelfettah Saadi
Software development using software architecture approaches and aspect-oriented programming represents today a very promising way to design high-quality software at lower cost. The Integrated Approach to Software Architecture (IASA) is an aspect-oriented software architecture approach using a component model totally independent of any software mechanism, mainly the interface concept. The IASA component model provides facilities, not supported by today's software architecture tools, to easily specify any topology an architect can imagine. It is used here to show how easy it is to design an E-Government application at a high level of abstraction using an aspect-oriented approach.
Keywords: Software Architecture, Component, Port, Connector, Aspect, E-government.
M. Hassan, I. Osman and M. Yahia
This paper proposes facial feature extraction approaches for face recognition based on traditional frequency transforms: the Discrete Sine Transform (DST), the Walsh-Hadamard Transform (WHT), the Discrete Hartley Transform (DHT), and a combination of the Discrete Sine Transform and the Walsh-Hadamard Transform. We suggest an Alternative Local Linear Regression (ALLR) approach to generate virtual frontal faces from non-frontal images, or a combination of frontal and non-frontal images, selected from the face database of the Olivetti Research Laboratory (ORL); the Euclidean distance is used for recognition. The paper obtained good results for the new facial feature extraction: DST and WHT obtained better results than DCT, and the combined DST and WHT obtained even higher results. The application of ALLR obtained a 12.65% improvement over DST+WHT.
Keywords: Face Recognition, Facial Feature Extraction, Discrete Sine Transform, Discrete Walsh-Hadamard Transform, Discrete Cosine Transform, Discrete Hartley Transform, Principal Component Analysis, Local Linear Regression.
Souheila BOUDOUDA and Mahmoud BOUFAIDA
Modeling and managing the business processes of advanced, distributed applications that span multiple organizations involves new challenges, mainly regarding the ability to cope with change across a wide variety of heterogeneous languages and technologies in permanent evolution. The goal of our work is to contribute to the field of designing advanced and distributed applications. Consequently, our proposed framework is based on the principles of MDA (Model Driven Architecture) models, which unify and simplify the modeling and design of applications. To this end, we present a conceptual meta-model that makes the development of specific models easier. These models are stored in a repository, which consists of a set of reusable software components intended to facilitate and automate application design.
Keywords: repository, meta-model, shared models, Model Driven Architecture.
Lahsen Abouenour, Karim Bouzoubaa and Paolo Rosso
With the expansion of the content available on the web, Question/Answering (Q/A) systems have become, among other search tools, a focus of researchers and users alike. For the Arabic language, very few works have been done in this field. In this paper, we focus on the improvement of Q/A through a Query Expansion (QE) process. Our approach is based on an ontology that we have built using Arabic WordNet. Indeed, we designed a QE process from the semantic relations existing among the concepts of our ontology. The preliminary experiments that we have conducted show that the accuracy of obtaining the expected answer was improved by our QE approach.
Keywords: Question/Answering, Query Expansion, Ontology, Arabic WordNet, Semantics, Morphology.
Samy M. Ayesh and Khaled M. Mahar
In this paper, a robust watermarking scheme is proposed to embed a watermark in the detail sub-bands of a directional filter bank (DFB) decomposition of an image using a quantization process. The proposed approach uses the DFB to overcome the lack of directionality associated with the discrete wavelet transform (DWT), and thus achieves more robustness than DWT-based methods. The algorithm starts by generating a binary logo watermark, which is permuted and embedded using the blind self image logo watermarking (SILW) algorithm. The robustness of the proposed algorithm is verified against a variety of attacks, including watermark removal and synchronization removal attacks. The proposed scheme is compared with a DWT-based blind SILW. The results show that the directional frequency domain gives better robustness under similar embedding conditions than the wavelet domain.
Keywords: Image Watermarking, SILW, Robustness, Quantization, DFB, Haar Filter.
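The quantization-based embedding step can be illustrated with a generic quantization index modulation (QIM) sketch: each transform coefficient is snapped to one of two interleaved lattices depending on the watermark bit, and the decoder checks which lattice the (possibly attacked) coefficient is closer to. The step size, coefficients and bits below are invented; the paper's actual SILW permutation and DFB decomposition are not reproduced:

```python
def embed_bit(coeff, bit, step=8.0):
    # Quantization index modulation: shift the quantizer lattice by step/2
    # for bit 1, then snap the coefficient to the chosen lattice.
    offset = 0.0 if bit == 0 else step / 2
    return round((coeff - offset) / step) * step + offset

def extract_bit(coeff, step=8.0):
    # Decode by checking which shifted lattice the coefficient is closer to.
    d0 = abs(coeff - embed_bit(coeff, 0, step))
    d1 = abs(coeff - embed_bit(coeff, 1, step))
    return 0 if d0 <= d1 else 1

coeffs = [13.2, -7.8, 41.5, 2.3]           # toy "sub-band" coefficients
bits = [1, 0, 1, 1]                        # toy watermark bits
marked = [embed_bit(c, b) for c, b in zip(coeffs, bits)]
noisy = [c + 1.5 for c in marked]          # mild attack: additive shift
print([extract_bit(c) for c in noisy])
```

As long as the attack distortion stays below a quarter of the quantization step, every bit survives, which is the robustness-versus-imperceptibility trade-off the step size controls.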
Osama M Badawy, Abd-Elhay A Sallam and Mohamed I Habib
Over the years, data mining has attracted much attention from the research community. Researchers attempt to develop faster, more scalable algorithms to navigate the ever-increasing volumes of data in search of meaningful patterns. Association rule mining is a data mining technique that tries to identify intrinsic patterns in large data sets. It has been widely used in different applications, and many algorithms have been introduced to discover these rules. However, most of the algorithms in use discretize all numeric attributes, which leads to a loss of knowledge. In this paper, an algorithm for mining quantitative association rules using swarm intelligence is introduced. The algorithm discovers optimized intervals in numeric attributes without the need for a discretization process. The algorithm does not need minimum support and confidence thresholds; instead, it looks for the best support that conforms to a frequent itemset. The algorithm is tested with both synthetic and real datasets, and the results show that it provides better accuracy when compared to other algorithms used for quantitative rules.
Keywords: Swarm Intelligence, Data Mining, Association Rules.
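The idea of letting a swarm optimize a numeric interval instead of pre-discretizing the attribute can be sketched with a small particle swarm. Each particle encodes an interval [low, high]; the fitness trading off support against interval width is invented for illustration and is not the paper's actual objective function:

```python
import random

def fitness(low, high, values, full_range):
    # Reward covered records, penalise overly wide intervals so the search
    # does not collapse onto the whole attribute range (invented trade-off).
    if low >= high:
        return -1.0
    support = sum(low <= v <= high for v in values) / len(values)
    amplitude = (high - low) / full_range
    return support - 0.6 * amplitude

def pso_interval(values, iters=60, n_particles=15, seed=2):
    rng = random.Random(seed)
    lo, hi = min(values), max(values)
    full = hi - lo
    pos = [sorted([rng.uniform(lo, hi), rng.uniform(lo, hi)])
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=lambda p: fitness(p[0], p[1], values, full))[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                # Standard PSO update: inertia + cognitive + social terms.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            pos[i].sort()
            if (fitness(pos[i][0], pos[i][1], values, full)
                    > fitness(pbest[i][0], pbest[i][1], values, full)):
                pbest[i] = pos[i][:]
        gbest = max(pbest, key=lambda p: fitness(p[0], p[1], values, full))[:]
    return gbest

# Ages clustered in 31-40 with outliers: the swarm should converge on a
# tight interval around the dense region rather than the full range.
ages = [31, 33, 35, 36, 38, 39, 40, 34, 37, 35, 18, 62, 70]
low, high = pso_interval(ages)
print(round(low, 1), round(high, 1))
```

The amplitude penalty is what prevents the degenerate "whole range, support 100%" rule, playing the role of the quality criterion that replaces a fixed minimum-support threshold.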
Educational data mining is concerned with developing methods for discovering knowledge from data that comes from educational environments. In this paper we used educational data mining to analyze learning behavior. In our case study, we collected students' data from a Database course. After preprocessing the data, we applied data mining techniques to discover association, classification, clustering and outlier detection rules. In each of these four tasks, we extracted knowledge that describes students' behavior.
Keywords: Educational Data Mining, E-Learning, Learning Management Systems.
Haya El-Ghalayini and Faten Kharbat
Given that ontologies can help in making sense of huge amounts of content, this paper proposes a new approach for building an ontology via a set of rules generated by a rule-based learning system. The proposed method integrates different data mining techniques to assist in developing a given domain ontology. This is done by utilising the extracted and representative rules generated from the original dataset to develop the ontology elements.
Ahmed Elwhishi, Issmail Ellabib and Idris El-Feghi
Location Area (LA) planning has a critical impact on cellular network design. It is characterized by the tradeoff between the location update overhead and the paging cost. The LA size could be made as large as the service area of the main switching centre, hence minimizing the registration overhead at the expense of maximizing the paging cost. On the other hand, if each cell is taken as an LA, the paging cost is minimized while the registration cost is maximized. Accordingly, the tradeoff between these two factors must be optimized in such a way that the total cost of paging and registration is minimized along with the link cost. Due to the complexity of this problem, meta-heuristic techniques are often used for analyzing and solving practical-sized instances. In this paper, we propose a solution to the LA planning problem based on the Ant Colony Optimization technique. The performance of the proposed approach is investigated and evaluated with respect to solution quality on a range of problem instances. Experimental results show that the Ant Colony Optimization approach outperforms other approaches, such as Simulated Annealing, on all problem instances.
Keywords: Ant Colony Approach, Simulated Annealing, Cell-to-Switch Assignment problem, Location Management in Cellular Networks, Swarm Intelligence.
AHMAD AWWAD and JEHAD AL-SADI
This paper proposes a new network, called the OTIS-Arrangement network, which is constructed from the cross product of the factor Arrangement network. We utilize the features of OTIS networks, which use both electronic and optical links, where many recent studies have shown that OTIS is one of the most promising candidate networks for future high-speed parallel computers. We present a general study of the topological properties of the OTIS-Arrangement, obtaining the main properties including size, degree, diameter and number of links, and then propose an efficient routing algorithm. The proposed routing algorithm and the derived properties can be used in general for all OTIS networks, which will save researchers the effort of working on each OTIS network individually. This study provides new means for further testing the suitability of the OTIS-Arrangement as an alternative parallel computer network.
Keywords: Parallel and distributed systems; Interconnection Networks; Arrangement network; OTIS-Networks; Topological properties; Routing Algorithm.
ABDELSALAM ALMARIMI, ANIL KUMAR, IBRAHIM ALMERHAG and NASREDDIN ELZOGHBI
In e-applications, security, integrity, non-repudiation, confidentiality, and authentication services are the most important factors. This paper deals with the confidentiality of electronic data transmitted over the internet. We propose a new approach for e-security applications using the concept of genetic algorithms with a pseudorandom sequence to encrypt and decrypt a data stream. The features of this approach include high data security and high feasibility for easy integration with commercial multimedia transmission applications. An experiment testing feasibility is reported, in which several images are encrypted and decrypted. The experimental results show that the proposed technique achieves a throughput rate high enough for real-time data protection.
Keywords: Cryptography, Pseudo-random Sequence, Genetic Algorithms.
O. Nouali, S. Kirat and H. Meziani
With the explosive growth in the quantity of new information, the development of information systems that target the best answers for users, closer to their expectations and personal tastes, has become an unavoidable necessity. Collaborative filtering systems are among these information systems, with particular characteristics that make the difference. The term collaborative filtering refers to techniques that use the known tastes of a group of users to predict the unknown preferences of a new user. This article describes a basic collaborative filtering platform, which allows users to discover interesting documents through automation of the natural process of recommendation. It allows them to express their opinions about the relevance of documents, according to their tastes and the quality they perceive; it also offers them the opportunity to benefit from the evaluations of other users with similar profiles who found those documents interesting. All these benefits are provided to users by the principle of collaboration, in return for an individual effort: evaluating documents.
Keywords: Collaborative filtering, Community, Platform, Predictions, Recommender systems.
Lotfi SALHI and Adnène CHERIF
In this paper we present a new method for voice disorder classification based on the gammachirp wavelet transform and a multilayer neural network (MNN). The processing algorithm is based on a hybrid technique which uses the gammachirp wavelet coefficients as input to the MNN. The training step uses a speech database of several pathological and normal voices collected from the national hospital "Rabta - Tunis" and was conducted in a supervised mode, first to discriminate between normal and pathological voices, and second to classify between neural and vocal pathologies (Parkinson's, Alzheimer's, laryngeal disorders, dyslexia...). Several simulation results are presented as a function of the disease and compared with the clinical diagnosis in order to obtain an objective evaluation of the developed tool.
Mebarka YAHLALI and Abdellah CHOUARFIA
The objective of CBSE (Component-Based Software Engineering) is the development of large software systems by integrating existing components. The traditional concept of developing applications by writing code has been replaced by the assembly of prefabricated components. The goal of the assembly is to reach a coherent application from a set of software components. We present in this article a method enabling the evaluation of the quality of a software component assembly. This method allows us to choose the best composition of components in order to obtain the system required by the user in terms of quality (non-functional requirements).
Keywords: Software Components, Assembly, Quality model, Quality of assembly, quality factors, quality criteria, quality metrics.
The size and complexity of databases used in many applications are rapidly increasing with time. In these environments, users who are not familiar with SQL and who have no knowledge of the database schema need to be able to query such databases. In response to this need, many researchers have introduced the capability of querying a database based on a list of keywords. A user does not have to state a full SQL query, but simply provides a list of keywords that seem to be of interest. The system then returns the resulting records, from different tables, that appear to be close to what the user is looking for, based on the list of keywords that he or she provides. There are two central problems with this approach that need to be addressed and solved. One is to improve the efficiency of finding records that are relevant to the query. The other is to improve effectiveness, which is the ability of the system to return only the relevant records. In this paper, we introduce a model to improve efficiency and effectiveness in databases. We also present the results of our performance analysis of keyword-based searches when this model is used.
Keywords: relational database, text database, efficiency, effectiveness, ranking, similarity.
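The core idea of keyword-based database querying can be sketched in a few lines: score each record by how many query keywords it contains and return the best matches. The following is a toy illustration of that ranking principle (the data, scoring function and `top_k` parameter are our own illustrative choices, not the model proposed in the paper):

```python
def score_record(record, keywords):
    """Count how many query keywords appear in any text field of a record."""
    text = " ".join(str(v).lower() for v in record.values())
    return sum(1 for kw in keywords if kw.lower() in text)

def keyword_search(records, keywords, top_k=3):
    """Return up to top_k records ranked by number of matching keywords."""
    ranked = sorted(records, key=lambda r: score_record(r, keywords), reverse=True)
    return [r for r in ranked if score_record(r, keywords) > 0][:top_k]

books = [
    {"title": "Database Systems", "author": "Smith"},
    {"title": "Organic Chemistry", "author": "Jones"},
    {"title": "Database Tuning", "author": "Smith"},
]
print(keyword_search(books, ["database", "smith"]))
```

A real system must do this efficiently over many tables and rank by relevance rather than raw match counts, which is exactly the efficiency/effectiveness trade-off the abstract discusses.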
QASEM A. AL-RADAIDEH
It has been our experience that in order to obtain a fair comparison between supervised learning approaches, it is necessary to perform the comparison using a unified classification approach. This paper presents an experimental study of classification algorithm evaluation approaches, in which two approaches are evaluated: the Holdout and the Cross-Validation methods. Rough Set Theory based classification is used as the classification technique. The impact of the evaluation approach on the classification results is discussed and, at the end, some guidelines for comparing classification algorithms are recommended.
Keywords: Data Mining, Rough Set Theory, Classification Evaluation Methods, Classifier.
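The two evaluation schemes compared above differ in how they partition the data: holdout uses a single random train/test split, while k-fold cross-validation ensures every example is tested exactly once. A minimal sketch of the two split strategies (index-level only; the fold count and test fraction are illustrative defaults):

```python
import random

def holdout_split(n, test_fraction=0.3, seed=0):
    """Single random split of n example indices into (train, test) lists."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cut = int(n * test_fraction)
    return idx[cut:], idx[:cut]  # train, test

def k_fold_splits(n, k=5):
    """Yield (train, test) index lists; each index lands in exactly one test fold."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test
```

The practical difference for classifier comparison: a holdout estimate depends on one lucky or unlucky split, whereas cross-validation averages over k splits, which is why the choice of evaluation approach can change the apparent ranking of classifiers.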
ABDUELBASET M. GOWEDER, TARIK RASHED , ALI S. ELBEKAIE and HUSIEN A. ALHAMMI
Nowadays, e-mail is widely becoming one of the fastest and most economical forms of communication. Thus, e-mail is prone to misuse. One such misuse is the posting of unsolicited, unwanted e-mails known as spam or junk e-mails. This paper presents and discusses an implementation of an anti-spam filtering system, which uses a Multi-Layer Perceptron (MLP) as a classifier and a Genetic Algorithm (GA) as a training algorithm. Standard genetic operators and advanced GA techniques are used to train the MLP. The implemented filtering system has achieved an accuracy of about 94% in detecting spam e-mails, and 89% in detecting legitimate e-mails.
Keywords: Artificial Neural Networks, Genetic Algorithms, Spam Emails, Legitimate Emails, Arabic Spam, Text
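Training an MLP with a GA means evolving the network's weight vector instead of using backpropagation: fitness is classification accuracy, and standard operators (selection, crossover, mutation) search weight space. Here is a self-contained toy sketch of that idea on made-up two-feature "spam" data (the network size, operators and data are our own illustrative choices, not the paper's configuration):

```python
import math, random

random.seed(1)

# Toy feature vectors: [link_ratio, spam_word_ratio]; label 1 = spam.
DATA = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.7, 0.9], 1), ([0.9, 0.6], 1),
        ([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.1, 0.1], 0), ([0.3, 0.2], 0)]

N_IN, N_HID = 2, 3
N_W = N_IN * N_HID + N_HID + N_HID + 1  # weights + biases of both layers

def mlp(weights, x):
    """Forward pass of a 2-3-1 MLP with tanh hidden units and sigmoid output."""
    h_w = weights[:N_IN * N_HID]
    h_b = weights[N_IN * N_HID:N_IN * N_HID + N_HID]
    o_w = weights[N_IN * N_HID + N_HID:N_IN * N_HID + 2 * N_HID]
    o_b = weights[-1]
    hidden = [math.tanh(sum(h_w[j * N_IN + i] * x[i] for i in range(N_IN)) + h_b[j])
              for j in range(N_HID)]
    z = sum(o_w[j] * hidden[j] for j in range(N_HID)) + o_b
    return 1 / (1 + math.exp(-z))

def fitness(weights):
    """Fitness = classification accuracy on the toy data set."""
    return sum((mlp(weights, x) > 0.5) == bool(y) for x, y in DATA) / len(DATA)

def evolve(pop_size=30, generations=40):
    """Evolve MLP weight vectors with elitism, uniform crossover and mutation."""
    pop = [[random.uniform(-1, 1) for _ in range(N_W)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]  # elitist selection: keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            if random.random() < 0.3:  # Gaussian mutation of one gene
                i = random.randrange(N_W)
                child[i] += random.gauss(0, 0.5)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("accuracy:", fitness(best))
```

On this tiny separable data set the GA quickly finds weights that classify all examples; the paper's system applies the same principle to real e-mail feature vectors at a much larger scale.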
Taher Omran Ahmed
Decision support systems are usually based on multidimensional structures which use the concept of the hypercube. Dimensions are the axes of analysis and form a space where a fact is located by a set of coordinates at the intersections of members of dimensions. Conventional multidimensional structures deal with discrete facts linked to discrete dimensions. However, when dealing with naturally continuous phenomena the discrete representation is not adequate. There is a need to integrate spatiotemporal continuity within multidimensional structures to enable analysis and exploration of continuous data. A multitude of research issues leads to the integration of spatiotemporal continuity in multidimensional structures. In this paper, we discuss some of these issues and briefly present a multidimensional model for continuous field data. We also define new aggregation operations. The model and the associated operations and measures are validated by a prototype.
ABDUELBASET M. GOWEDER, HUSIEN A. ALHAMMI and TARIK RASHED
There are several stemming approaches that are applied to the Arabic language, yet no complete stemmer for this language is available. The existing stem-based stemmers for Arabic text have poor performance in terms of accuracy and error rates. In order to improve the accuracy of stemming, a hybrid method is proposed for stemming Arabic text to produce stems (not roots). Improving the accuracy of stemming will necessarily lead to great improvements in many applications, including information retrieval, document classification, machine translation, text analysis and text compression. The proposed method integrates three different stemming techniques: morphological analysis, affix removal and dictionaries.
Keywords: Stemming, Information retrieval, Affix-Removal, Arabic stems, Highly inflected languages.
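The combination of affix removal with a dictionary check can be sketched very simply: accept a word if it is already a known stem, otherwise strip candidate prefixes/suffixes and re-check the dictionary. The toy stem dictionary and affix lists below are tiny illustrative samples (a real stemmer uses a large lexicon and a full morphological analyzer, which this sketch omits):

```python
# Toy dictionary of known stems; a real system would use a large lexicon.
STEMS = {"كتاب", "مدرس", "طالب"}
PREFIXES = ["وال", "بال", "ال", "و", "ب"]  # longest first
SUFFIXES = ["ات", "ون", "ين", "ها", "ة"]

def stem(word):
    """Hybrid stemming sketch: accept dictionary stems as-is, otherwise strip
    one prefix and/or one suffix and check the dictionary again."""
    if word in STEMS:
        return word
    candidates = [word]
    for p in PREFIXES:
        if word.startswith(p) and len(word) > len(p) + 2:
            candidates.append(word[len(p):])
    more = []
    for c in candidates:
        for s in SUFFIXES:
            if c.endswith(s) and len(c) > len(s) + 2:
                more.append(c[:-len(s)])
    for c in candidates + more:
        if c in STEMS:
            return c
    return word  # fall back to the surface form rather than over-stem

print(stem("الكتاب"))  # definite article stripped -> كتاب
```

The dictionary check is what distinguishes this hybrid scheme from blind affix removal: stripping is only accepted when it produces a known stem, which is how such methods reduce over-stemming errors.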
Meriem Zaïter, Salima Hacini and Zizette Boufaïda
Trust is an important concept in our daily life. It deals with the natural notion that every person systematically uses to decide whether an exchange with another person will be possible or not, so it is found in all kinds of communication, in anything that brings two parties together. Moreover, the development of information technology has led to the intensive use of several applications such as electronic commerce. In this field, mobile agents are the most suitable technology. However, they bring a serious security risk. The majority of these applications require an honest host for mobile agent execution. The main objective of this paper is to identify a set of metrics that make it possible to estimate the trustworthiness of a visited host, in order to execute the mobile agent only if the honesty of this host is proved. Thus, the trust evaluation is used to protect the mobile agent from malicious hosts.
Keywords: distributed system, e-commerce, mobile agent security, trust, trust metrics.
Eng. AHMED HAMDI ABU ABSA and Dr. SANA'A WAFA Al-SAYEGH
In this paper we explain the details of the implementation of a computer program which employs Genetic Algorithms (GAs) in the quest for an optimal lecture timetable generator. GA theory is covered with emphasis on less fully encoded systems employing non-genetic operators. The field of automated timetabling is also explored. A timetable is explained as, essentially, a schedule with constraints placed upon it. The program, written in Java with an object-oriented design, uses dedicated genetic algorithm libraries for the implementation. In a simplified university timetable problem it consistently evolves timetables free of constraint violations. The effects of altered mutation rate and population size are tested. It is seen that the GA could be improved by the further incorporation of repair strategies, and is readily scalable to the complete timetabling problem.
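The phrase "a schedule with constraints placed upon it" translates directly into the GA's fitness function: a candidate timetable is penalized for each hard-constraint violation, and the GA evolves toward violation-free solutions. A minimal sketch of such a violation counter (the lectures, rooms and constraint set are toy examples of our own, not the paper's encoding):

```python
from collections import Counter

# A candidate timetable assigns each lecture a (room, timeslot) pair.
LECTURES = {"L1": "Dr.A", "L2": "Dr.A", "L3": "Dr.B"}  # lecture -> lecturer

def violations(timetable):
    """Count hard-constraint violations: two lectures in the same room and
    slot, or one lecturer teaching two lectures in the same slot."""
    room_slot = Counter(timetable.values())
    lecturer_slot = Counter((LECTURES[lec], slot)
                            for lec, (_, slot) in timetable.items())
    clashes = sum(c - 1 for c in room_slot.values() if c > 1)
    clashes += sum(c - 1 for c in lecturer_slot.values() if c > 1)
    return clashes

good = {"L1": ("R1", 1), "L2": ("R1", 2), "L3": ("R2", 1)}
bad  = {"L1": ("R1", 1), "L2": ("R2", 1), "L3": ("R1", 1)}  # Dr.A and R1 double-booked
print(violations(good), violations(bad))
```

A GA would use `-violations(t)` (or a scaled penalty) as fitness; the repair strategies mentioned in the abstract would instead detect a clash and move one of the offending lectures directly.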
Josef Börcsök, Ali Hayek, Bashir Machmur and Muhammad Umar
This paper discusses the process of implementing the synthesized netlist of the CPU design on a target FPGA device. The place and route tools read the netlist, extract the components and nets from the netlist, place the components on the target device, and interconnect the components using the specified interconnections. Place & Route, also referred to as PAR, follows synthesis and simulation. PAR is actually a subset of a larger EDA stage known as design implementation. Efficient placement and routing algorithms play an important role in FPGA architecture research. Together, the place-and-route algorithms are responsible for producing a physical implementation of an application circuit on the FPGA hardware. PlanAhead is a new design tool from Xilinx that is used for floor planning.
Keywords: FPGA, VHDL, PlanAhead, Place & Route, 1oo2 system.
Josef Börcsök, Ali Hayek, Bashier Machmur and Muhammad Umar
The target of this paper is to develop an embedded system using a soft-core processor in an FPGA. Embedded systems can be found in many areas, in everyday life as well as in industry. Their wide spread and especially their increasing complexity require new strategies and more flexibility. Since software has already reached a high level of flexibility, the focus shifts to the hardware. Flexibility of hardware means changing the microcontroller or processor for new requirements without putting new hardware into the system. This kind of strategy is only reachable with the aid of a Field Programmable Gate Array (FPGA). All kinds of applications can run on this soft-core processor; in this work, uClinux is used to read the temperature from a PT100 sensor and show the results on a web server.
Keywords: Embedded System, FPGA, VHDL, IP-Core, MicroBlaze, uClinux, PT100 Sensor.
Mohamed Amine Klâa, Soufien Touj and Najoua Essoukri Ben Amara
Abstract We propose in this paper an indexing-by-recognition approach applied to archives of Arabic printed document images. Indeed, our goal is to retrieve the set of images likely to contain the keyword or keywords introduced by the user as a request. To fulfill this goal, we started with the creation of prototypes of Arabic printed characters taken in their different shapes; thereafter we implemented a module for analyzing the user's textual request as well as a procedure for synthesizing sub-word models. We developed a connected-components detection procedure which constitutes the preprocessing phase of the recognition stage. Finally, we implemented a module for identifying the requested word in a database of Arabic printed document images.
Keywords: indexing by recognition, Arabic printed documents images.
Razika DRIOUCHE and Zizette BOUFAIDA
Today, enterprise application integration is still facing many problems, and the crucial one is semantic heterogeneity. This problem is not adequately addressed by today's solutions, which focus mainly on technical and syntactic integration. Using the semantic aspect will promote enterprise applications by providing them with more consistency and interoperability. The development of ontologies to encapsulate the applications' heterogeneity may produce new obstacles for application ontology integration. This article proposes a mapping process in order to address the semantic integration problem. Ontology mapping is important when working with more than one ontology, and similarity considerations are typically the basis for it. In this paper an approach to integrating various similarity strategies is presented. In brief, we also determine similarity through rules which have been derived by ontology experts.
Keywords: Enterprise Application Integration, Application ontology, Mapping, Similarity Rules.
Ismail I. Hmeidi , Hassan M. Najadat and Ahmed J. Al-Sha'or
This paper proposes a modified approach for the concept-based query expansion method, referred to as a New Approach for Automatic Query Expansion using Genetic Algorithm (NAQEGA). This technique employs the query concept to find the term most similar to the query and adds this term to the query to form a new query. This process is repeated until no term remains that is similar to the query concept. Furthermore, an automatic GA method is proposed to overcome the exhaustive calculation involved in finding the term most similar to the query concept and adding it to the query. Also, a fitness function is defined in the GA to fulfill the need of finding and adding the most similar term to the query. The genetic operators applied in this paper are the partially matched multipoint crossover method for the crossover operations and the inversion mutation method for the mutation operations. The proposed algorithm improves the average recall rate by 9% when compared to concept-based query expansion, the average precision rate is enhanced by 5%, and the average harmonic mean of retrieved documents with respect to the users' queries is improved by 8%.
Keywords: Automatic Query Expansion, Genetic Algorithm.
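The expansion loop described above (repeatedly add the term most similar to the query until no sufficiently similar term remains) can be sketched without the GA part using cosine similarity over toy term vectors. The vectors, threshold and vocabulary below are illustrative assumptions of ours, not the NAQEGA setup:

```python
import math

# Toy term-document incidence vectors (occurrences of each term per document).
TERM_VECTORS = {
    "car":    [3, 0, 2, 1],
    "auto":   [2, 0, 2, 1],
    "flower": [0, 4, 0, 0],
    "engine": [1, 0, 3, 1],
}

def cosine(u, v):
    """Cosine similarity between two term vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def expand(query_terms, threshold=0.8):
    """Repeatedly add the term most similar to the query centroid until no
    remaining term exceeds the similarity threshold."""
    query = list(query_terms)
    while True:
        centroid = [sum(TERM_VECTORS[t][i] for t in query) / len(query)
                    for i in range(4)]
        rest = [(cosine(TERM_VECTORS[t], centroid), t)
                for t in TERM_VECTORS if t not in query]
        if not rest:
            break
        score, best = max(rest)
        if score < threshold:
            break
        query.append(best)
    return query

print(expand(["car"]))  # adds "auto" and "engine", leaves "flower" out
```

The exhaustive similarity search in this loop is exactly what becomes expensive over a real vocabulary, which motivates replacing it with a GA search as the abstract proposes.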
Hassan M. Najadat, Mohammad K. Kharabsheh and Ismail Hmeidi
This paper introduces a new algorithm called User Association Rules Mining (UARM) for solving the problem of generating an inadequately large number of rules in association rule mining, using a fuzzy logic method [1, 2]. In order to avoid mistakes in user-defined thresholds, the user has the flexibility to determine constraints based on a set of features. In comparison with other well-known and widely used association rule algorithms, such as the Apriori algorithm, UARM attempts to bring more enhancements to the problem and adopts the significance of association rules to enhance the quality of the application by providing insightful clues for more effective decision-making.
Keywords: Data Mining, Association Rules, Minimum Support, Fuzzy Logic.
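The user-defined thresholds at the heart of association rule mining are minimum support (how often an itemset occurs) and minimum confidence (how reliably the antecedent implies the consequent). A minimal sketch over toy market-basket transactions, restricted to two-item rules for brevity (the transactions and threshold values are illustrative, not from the paper):

```python
from itertools import combinations

TRANSACTIONS = [
    {"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"},
    {"milk", "butter"}, {"bread", "milk"},
]

def support(itemset):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in TRANSACTIONS) / len(TRANSACTIONS)

def rules(min_support=0.4, min_confidence=0.6):
    """Enumerate two-item rules X -> Y meeting the user-defined thresholds."""
    items = sorted(set().union(*TRANSACTIONS))
    out = []
    for x, y in combinations(items, 2):
        for a, b in ((x, y), (y, x)):
            s = support({a, b})
            if s >= min_support:
                conf = s / support({a})
                if conf >= min_confidence:
                    out.append((a, b, round(conf, 2)))
    return out

print(rules())
```

Setting these thresholds too low floods the user with rules and setting them too high hides useful ones; UARM's feature-based constraints are meant to replace this error-prone manual tuning.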
Sonia. Gueraich and Zizette. Boufaida
In this paper we present a semantic annotation method based on ontologies for managing a corporate memory. Our approach is based on four concepts: the marguerite model, ontology, Answer Garden and annotation. The first concept allows managing the corporate memory by capitalizing information and knowledge. In our work, the initial marguerite model is adapted to our needs. By using ontologies which describe the users and the domain (enterprise), we attempt to provide the necessary concepts and roles for the instantiation process needed by the annotation method. We have introduced semantic annotations in order to build a corporate semantic web. They are operational annotations, intended to be manipulated by machines. The idea of integrating the "Answer Garden" philosophy into the annotation method stems from two considerations: the first is the materialization of the sources of the memory as web documents; the second is to assist the annotation activity in the annotation of the corporate memory. In the implementation phase we rely on XML. It provides a strong level of representation for the annotation process (RDF) and the ontologies (OWL). The major purpose of the approach is to permit the building of an interoperable corporate semantic memory embedded in a corporate semantic web.
Keywords: Corporate Memory, Semantic Annotation, Ontology, Answer Garden, Marguerite Model.
Sana FAKHFAKH and Walid MAHDI
In this paper, we present our study of the problem of face authentication. Because of the differences in human pose, face expression, hairstyle, image style and lighting conditions, the problem is complex. To solve it we have tried to represent face images with discriminative feature vectors. However, the overlapping of face characteristics is considered a critical problem in this domain. In addition, the choice of face features is a challenging task. In this view, we present three features based on different image processing tools and heuristics for robust recognition. Our approaches are based on a geometric face model with a Euclidean distance computed between the checked points, on extracting fiducial points with Harris detectors to generate a map of the face, and on a last method that generates a unique mask for each face. Finally, we try to combine these methods. We have tested our system with our own database of 50 subjects, each with 5 expressions. The authentication rate is nearly 95%.
Keywords: face descriptors, Harris points, face mask, geometric model, map of face, multi-describers.
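The geometric-model branch of this kind of system boils down to a Euclidean distance between a probe's feature vector and the enrolled template, accepted below a decision threshold. A minimal sketch (the feature values and threshold are made-up illustrations, not the paper's measured features):

```python
import math

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def authenticate(probe, enrolled, threshold=0.5):
    """Accept the probe if its distance to the enrolled template is below
    the decision threshold."""
    return euclidean(probe, enrolled) < threshold

template = [0.32, 0.48, 0.27, 0.61]  # enrolled geometric features (toy values)
same     = [0.30, 0.50, 0.25, 0.60]  # slight variation of the same face
other    = [0.80, 0.10, 0.75, 0.20]  # a different face
print(authenticate(same, template), authenticate(other, template))
```

In practice the threshold is tuned on a validation set to trade off false accepts against false rejects, and combining several such feature types (as the abstract proposes) tightens that trade-off.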
Belhassen AKROUT, Imen KHANFIR KALLEL, Chokri BENAMAR and Boulbaba BEN AMOR
Iris recognition, a relatively new biometric technology, has great advantages, such as variability, stability and security, and is thus the most promising for high-security environments. An iris recognition approach is proposed in this paper. We describe several methods: the first is based on the grey-level histogram to extract the pupil, the second is based on elliptic and parabolic Hough transforms to determine the edges of the iris and the upper and lower eyelids, the third uses 2D Gabor wavelets to encode the iris, and finally the Hamming distance is used for authentication.
Keywords: Iris extraction, Gabor wavelet, Hough transform, Hamming distance, iris coding.
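The final matching step is simple once the Gabor-wavelet encoding has produced a binary iris code: two codes are compared with a normalized Hamming distance, and distances below a threshold indicate the same iris. A minimal sketch with toy 8-bit codes (real iris codes have thousands of bits, and the ~0.32 threshold shown is a commonly cited value, not necessarily the one used in this paper):

```python
def hamming_distance(code_a, code_b, mask=None):
    """Normalized Hamming distance between two binary iris codes; bits
    flagged as unreliable in the optional mask are ignored."""
    assert len(code_a) == len(code_b)
    usable = [i for i in range(len(code_a)) if mask is None or mask[i]]
    diff = sum(code_a[i] != code_b[i] for i in usable)
    return diff / len(usable)

def same_iris(code_a, code_b, threshold=0.32):
    """Decision rule: small distances indicate the same iris."""
    return hamming_distance(code_a, code_b) < threshold

enrolled = [1, 0, 1, 1, 0, 0, 1, 0]
probe    = [1, 0, 1, 0, 0, 0, 1, 0]  # one bit differs -> distance 0.125
print(same_iris(enrolled, probe))
```

The mask parameter reflects why eyelid detection matters in the pipeline above: bits covered by eyelids or eyelashes are excluded from the comparison rather than counted as mismatches.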
Fatima Zohra Younsi, Djamila Hamdadou and Karim Bouamrane
Making good decisions on territory planning is a task that has become increasingly difficult. Several classification methods have been developed in recent years, each with its own advantages and limitations. New artificial intelligence technology allows knowledge processing to be included in decision-support tools. The application of Artificial Neural Networks (ANN) to predict the behaviours of nonlinear systems has become an attractive alternative to traditional statistical methods. This article proposes a decision-making model that applies Geographical Information Systems (GIS) to describe and analyze the decision context and Artificial Neural Networks (ANN) to synthesize these data and assist decision makers in their choices. This model presents a unique combination of these important tools. It is also a decision model defining the various features of the main actors in the process. An example is given as an evaluation model for a part of Switzerland.
Keywords: Territory Planning (TP), Spatial Decision Support, Geographic Information Systems (GIS), Artificial Neural Networks (ANN), Cooperation, Criterion-Map and Decision- Map.
Najla Sassi Jaziri, Wassim Jaziri and Faiez Gargouri
Ontologies have recently become a topic of interest in computer science since they are seen as a semantic support to make data models explicit, to enrich them, and to ensure the interoperability of data. In the field of ontology engineering, especially when using ontologies in dynamic environments, supporting ontology evolution becomes essential and extremely important. It enables users to integrate changes and to handle different ontology versions. An important aspect of the evolution process is to guarantee the consistency of the ontology when changes occur, considering the semantics of the changes. Formalizing the change semantics requires a definition of the ontology model together with its change operations, the consistency conditions and a set of rules to enforce these conditions. We study in this paper the problem of ontology evolution. We also propose a formalization, using the Z language, of the different types of changes occurring in evolutionary environments.
Keywords: Ontology, evolution process, formalization, evolution changes, integration of new knowledge.
Djamel Eddine Saïdouni, Nabil Belala, Messaouda Bouneb, Abdeldjalil Boudjadar and Boulares Ouchène
This work deals with the specification and verification of concurrent systems. Our goal is to exploit an implementable model, namely the maximality-based labeled transition system, which permits expressing true concurrency in a natural way without splitting actions into their start and end events. To do this, we give an operational semantics to build maximality-based labeled transition systems for Place/Transition Petri nets.
Keywords: Maximality-based labeled transition systems, Maximality bisimulation, Petri nets.
Mr. Chaker MEZIOUD
Users are more and more demanding towards their software. They expect high reliability, a constantly increasing number of services, and respect for usability and cost constraints. As a result, the size and complexity of software increase. Current techniques do not protect us from design problems, and such problems are more expensive to detect during the implementation phase. That is why we need a high-level model of the system; this level of abstraction is called the software architecture, which serves for modeling, analyzing and testing the most important aspects of the development in order to obtain safer software in a faster way. This idea leads us to work on the design and development of a new architecture description language (ADL), based on a formal model in the mathematical sense and focused on the formal description of software architectures (structure and behavior: dynamics, execution time). In other words, we aim at an architecture description language (π-ADL) with a formal foundation, namely the π-calculus, a very powerful language for dynamic systems evolving in time, together with the PVS system (Prototype Verification System), based on higher-order logic, for the purpose of analyzing dynamic and mobile software architectures. This requires a formalism for converting an important set of π-calculus concepts into the corresponding concepts of the PVS language.
Keywords: software architecture, architecture description language, formal language, higher-order logic, proof checking system.
Md. Shariful Islam Bhuyan and Reaz Ahmed
In spite of being a successful syntactic theory in many respects, Head-driven Phrase Structure Grammar (HPSG) has inadequate coverage of morphological constructions, especially of nonconcatenative morphology, which is prominent in Semitic languages such as Arabic and Hebrew. In this paper, we extend the HPSG framework to support the rich nonconcatenative morphology of the verbal system of Arabic, the best instance of nonconcatenative morphology among living languages. We also introduce the necessary features for the syntactic and semantic aspects of the Arabic verb.
Keywords: Nonconcatenative Morphology, Head-driven Phrase Structure Grammar, Arabic Verbal Morphology, Constraint-based Grammar.
Y. Benzian and N. Benamrane
Image segmentation is an essential and complex process in artificial vision and in particular in medical imaging; its results are of great utility for the other imaging processes. We propose in this paper a segmentation approach based on level sets which incorporates low-scale cooperative analysis of both the image and the curve. The image at a low resolution level provides information on the coarse variation of gray-level intensity. From the same perspective, the curve at a low resolution scale provides a coarser curvature value. The purpose of the image-scale cooperative approach is to avoid stopping the curve evolution at local minima of the image. This method is tested on a sample 2D abdomen image and can be applied to other image types. The results obtained are satisfying and show the good precision of the method.
Keywords: Segmentation, Level Sets, Curvature, Image scale.
Dr. Adnan Shaout and Tejas Chhaya
For major organizations, businesses and government agencies, the biggest constraints on a given software product are cost, schedule, reliability, and quality. Hence, more and more emphasis is put on software processes, asking software engineers to follow them. The goal of this paper is to present a modified software process model using the Personal Software Process (PSP), Team Software Process (TSP) and Six Sigma. The new process model was used for a moderately complex, medium-size embedded systems project in the automotive industry. The result of using this new process model has shown a 70% improvement in
Nabil OUAZENE and Azeddine BILAMI
In multi-hop ad hoc networks, queuing resources become unavailable under high load. When the traffic increases, received frames are often dropped and retransmitted later by the senders after a short timeout. This situation leads to more conflicts in accessing the wireless medium and decreases the network performance. In this paper, we propose a new scheme to avoid poor utilization of the wireless medium under high-load conditions. Simulations under NS2 have been conducted to study QoS parameters, especially throughput, delay and jitter. The obtained results show that the performance of our proposal is better than that of IEEE 802.11.
Keywords: MAC, 802.11, full queue, frame dropping.
SUHAIL ODEH, JOSEPH HODALI, MAHA SLEIBI and ILYAA' SALSA'
Our non-invasive brain-computer interface (BCI) uses EEG signals and beta frequency bands over the sensorimotor cortex to control cursor movement horizontally (i.e., in one dimension, 1-D). The main goal of this study is to help people with severe motor disabilities (e.g., spinal cord injuries) and provide them with new communication and control options by which they can move the cursor in one dimension. In this study, offline analysis of the collected data was used to make the user capable of controlling the horizontal movement of the cursor. The data was collected during a session in which the user selected between two targets by thinking about moving either the right-hand little finger or the left-hand little finger. The ANFIS (Adaptive-Network-based Fuzzy Inference System) algorithm was examined as the classification method with some parameters. In the offline analysis, the method showed significant performance, with a classification accuracy level of more than 80%.
This result suggests that using the ANFIS algorithm will improve online operation of the current BCI system.
Keywords: Brain-Computer Interface (BCI), ANFIS algorithm, Fuzzy logic, Electroencephalogram (EEG).
M. Guerroumi, N. Badache and S. Moussaoui
MAC protocols must perform the functionality required by the application while utilizing the limited resources available on sensor nodes. Limited energy resources place strict limits on the operations a sensor node may accomplish and differentiate sensor networks from other networks. Application and protocol designers must utilize the hardware resources on the sensor nodes judiciously to conserve energy and prolong the network lifetime. In this paper we present a new solution to improve collision avoidance and optimize the energy consumption.
Keywords: Collision avoidance, Energy consumption, Wireless sensor network.
WIEM KHLIF, MOHAMED TMAR and FAEIZ GARGOURI
For several years, various works have been proposed to evaluate software quality. However, few works have been elaborated to evaluate design quality. Our main objective here is to allow evaluating the quality of a given object-oriented model at the design level, from the first phases of analysis and modelling and especially before reaching the implementation level. First, we present design metrics and heuristics. For a better use of these heuristics, we associate them with the different Unified Process phases and activities. Our contribution also consists in proposing criteria to judge and to improve design quality, especially for sequence diagrams.
Keywords: Design heuristics, unified process, object oriented design quality, design metrics.
Makhlouf Derdour, Nacira Ghoualmi-Zine and Philippe Roose
Abstract Mobile networks are moving towards integrating increasingly rich and varied media content exchanged between various devices, which leads to the appearance of several problems such as the incompatibility of mobile devices and their limited capacity. With the need to provide access to content and multimedia applications for mobile devices, multimedia information must be adapted to the devices and to this changing mobile environment. To this end, a set of methods, languages, formats and protocols has been developed, especially by the W3C. This article discusses the problem of adaptation to mobile devices, where we consider the context of the client and also the environment from which the client's request arrives, in order to ensure good communication between devices or individuals while addressing the problems of disability and of incompatibility between these devices and even along the communication path. The purpose of this article is, first, to give a precise definition of adaptation in our context of work and then to propose an information system capable of resolving the problems of adaptation for mobile devices. This system consists of a set of processes, each providing a precise function. For each type of adaptation it uses a number of processes; the selection of processes depends on the quality of service we want to obtain (the speed of adaptation, the compression ratio, the quality of the components of a message, etc.).
Keywords: Multimedia Messaging Service (MMS), mobile device, heterogeneity, information system, Transcoding, Transmoding.
Laïd Kahloul and Allaoua Chaoui
The code mobility area evolves speedily. Many technologies, such as programming languages and platforms, have been proposed. These technologies allow the spread of the field into various domains, and many applications have been realized. Unfortunately, code mobility software engineering has not kept the required pace. Thus, the applications carried out suffer from compatibility, interoperability and security problems. The objective of this work is to present an overview of the evolution of code mobility software engineering. Through this presentation, we discuss the most recently proposed approaches and show some ambitious formal issues.
Keywords: Code mobility, design paradigms, software engineering approaches, formal methods.
Safaa E. M. Eltahir and Dr. Mohamed E. M. Musa
Reliability and stability are desirable in most systems, because mistakes can be extremely costly. Writing correct specifications can lead to this needed reliability. Formal languages help in writing precise specifications. However, formal modeling practitioners admit that the production of formal specifications is not an easy matter and requires experience. For these reasons, graphical notations in general, and UML in particular, are more popular, in spite of the fact that they lack formal semantics. Integrated semi-formal tools are considered a promising way of getting the benefits of both while minimizing their shortcomings. This paper introduces an experiment on using an integrated semi-formal tool (RoZ) for the generation of formal Z specifications, in order to evaluate to what extent using integrated semi-formal modeling tools is successful and practical and could be done by inexperienced modelers. Our animation-based experiment gives positive results and suggests more research work to be done to improve and support integrated semi-formal tools.
Keywords: formal specifications, semi formal, UML, Z, RoZ tool, Jaza animator.
This paper analyses the investment incentives in a vertical separation structure with symmetric costs and their impact on the behavior of firms on the downstream market, which compete in a Bertrand duopoly and are differentiated horizontally à la Hotelling. We find that when a firm values the upstream investment less, it will be more encouraged to deviate from the collusive agreement. It is also shown that firms will have more incentive to collude as the difference in their ability to use the infrastructure investment increases.
Keywords: Networks industries, investment, collusion.
Nader Nada, Mohamed Kholief, Shehab Tawfik and Noha Metwally
Abstract One of the main objectives of educators is to identify inspiring and interactive approaches to learning, and to encourage students to be more receptive and cooperative in the classroom. To help educators achieve these goals, we employed constructivist epistemology and constructivist cognitive psychology together with Mind Maps and the Dynamic Mobile Knowledge (DMK) Toolkit. The toolkit can serve as the foundation for a new kind of integration of Internet resources with classroom, laboratory and field experiences, especially when used with "expert skeletal" Mind Maps to scaffold learning. It is our thesis that good theory-based use of appropriate technology can increase the benefits of using Mind Maps in education and lead to dramatically improved education. In this paper we first explore the Mind Map concept; then we present and explain the advantages of the DMK toolkit and how it can support mind mapping and the integration of a whole array of learning experiences. In the last section we present two case studies that provide evidence of how the DMK toolkit and Mind Maps can lead to an education paradigm shift and enhance the outcome of the learning experience in higher education.
Keywords: Mind Map, Higher education, Mobile Knowledge, DMK toolkit.