The number needed to treat has become one of the most important parameters for reporting treatment effects in clinical trials with binary outcomes such as “positive” or “negative”. Defined as the reciprocal of the absolute risk reduction, the number needed to treat is the number of patients who need to be treated to prevent one additional adverse event. In the medical literature, the number needed to treat is usually reported with its asymptotic confidence interval, the method used by most software packages even though it is known not to be the best one. The aim of this paper is to introduce three new methods of computing confidence intervals for the number needed to treat/harm. The proposed methods and the asymptotic one (here called IADWald) were implemented in the PHP programming language. The performance of each method, for different sample sizes (m, n) and different values of the binomial variables (X, Y), was assessed using a set of criteria: the upper and lower boundaries; the average and standard deviation of the experimental errors; and the deviation of the experimental errors relative to the imposed significance level (α = 5%). The methods were assessed on random binomial variables X, Y (where X < m, Y < n) and random sample sizes m, n (4 ≤ m, n ≤ 1000). The performances of the implemented methods of computing confidence intervals for the number needed to treat/harm are presented so that they can be taken into consideration whenever a confidence interval for the number needed to treat is used.
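The asymptotic (Wald) approach the paper takes as its reference can be sketched in a few lines of Python: the Wald interval is computed for the absolute risk reduction and then inverted to give bounds for the number needed to treat. This is a minimal illustration of the standard textbook formula, not the authors' PHP implementation, and the counts used in the example are hypothetical.

```python
from math import sqrt

def nnt_wald_ci(x, m, y, n, z=1.96):
    """NNT with an asymptotic (Wald) confidence interval, obtained by
    inverting the Wald interval for the absolute risk reduction (ARR).
    x/m = events in the treated group, y/n = events in the control group."""
    p_t = x / m          # event rate, treated
    p_c = y / n          # event rate, control
    arr = p_c - p_t      # absolute risk reduction
    se = sqrt(p_t * (1 - p_t) / m + p_c * (1 - p_c) / n)
    lo_arr, hi_arr = arr - z * se, arr + z * se
    # invert the ARR bounds; the order swaps because 1/x is decreasing
    return 1 / arr, 1 / hi_arr, 1 / lo_arr

# hypothetical trial: 10/100 events on treatment, 20/100 on control
nnt, lo, hi = nnt_wald_ci(10, 100, 20, 100)   # ARR = 0.1, NNT = 10
```

Note that when the ARR interval includes zero, the inverted NNT interval is not a simple finite range, which is one reason the asymptotic method is known to behave poorly.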

Investigating the properties of chemical compounds in the classical, experimental way requires time and many resources. A number of quantitative structure-property relationships (QSPRs) have therefore been developed in order to shorten the research and analysis time for chemical properties on classes of compounds. The ability of the molecular descriptor family (MDF) to produce QSPRs was used here to estimate adsorption onto activated carbon in water. Sixteen organic compounds and their adsorption onto activated carbon in water served for obtaining the QSPRs. The MDF methodology comprises building three-dimensional models of the molecules with the HyperChem software, generating the MDF members with a set of PHP (PHP: Hypertext Preprocessor) programs, storing them on a MySQL database server, and finally finding the structure-property relationships with a set of Delphi multiple linear regression programs. A total of 105319 MDF members entered the multiple linear regression search. Five of our best QSPRs are presented: one mono-varied, two bi-varied and two tri-varied models. The MDF QSPR methodology has great potential for finding QSPR models, as proved here for the adsorption of the studied organics onto activated carbon in water.
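The final step of the pipeline, fitting a multi-varied linear model and scoring it by its determination coefficient, can be sketched as below. The descriptor values and adsorption capacities are hypothetical stand-ins, not actual MDF members or data from the paper, and the solver is a plain normal-equations least squares rather than the authors' Delphi programs.

```python
def mlr_fit(X, y):
    """Ordinary least squares via the normal equations (A^T A) b = A^T y,
    solved by Gaussian elimination with partial pivoting."""
    A = [[1.0] + list(row) for row in X]          # prepend intercept column
    k, n = len(A[0]), len(A)
    M = [[sum(A[r][i] * A[r][j] for r in range(n)) for j in range(k)]
         for i in range(k)]
    v = [sum(A[r][i] * y[r] for r in range(n)) for i in range(k)]
    for col in range(k):                           # forward elimination
        piv = max(range(col, k), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, k):
            f = M[r][col] / M[col][col]
            for c in range(col, k):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    b = [0.0] * k                                  # back substitution
    for i in reversed(range(k)):
        b[i] = (v[i] - sum(M[i][j] * b[j] for j in range(i + 1, k))) / M[i][i]
    return b

# hypothetical bi-varied model: two descriptors per compound
X = [(1.2, 0.7), (2.1, 1.5), (3.0, 2.2), (3.8, 2.9), (5.1, 4.0)]
y = [2.0, 3.4, 4.6, 5.9, 7.8]
b = mlr_fit(X, y)
pred = [b[0] + b[1] * x1 + b[2] * x2 for x1, x2 in X]
ybar = sum(y) / len(y)
r2 = 1 - sum((yi - pi) ** 2 for yi, pi in zip(y, pred)) / \
        sum((yi - ybar) ** 2 for yi in y)          # determination coefficient
```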

The paper presents the main aspects of a classification system, its database design, and the software implementation of a multiple choice examination system for general chemistry, built to generate tests for student evaluation. The testing system was used to generate items for multiple choice examinations for first-year undergraduate students in Materials Engineering and Environmental Engineering at the Technical University of Cluj-Napoca, Romania, all of whom attend the same General Chemistry course.

In recent years, the performance requirements for petroleum process plants have become increasingly difficult to satisfy. Mathematical modelling is very useful for understanding, designing and operating the complex systems of the petroleum industry at relatively low cost and with minimum risk. This paper therefore proposes mathematical models for a two-phase gas-liquid horizontal separator, valid with an accuracy of about 1.641% based on the liquid temperature values. The simulated temperature values were 300.24 K, 299.69 K and 299.14 K, with corresponding industrial values of 300.22 K, 299.67 K and 299.11 K, respectively. Within the boundaries of the stated limitations, the model can be used to predict the operation of the separator at different operating conditions, to optimize the separator products, and as a tool for further expansion, among other uses.

Patterns and pattern languages are ways to capture experience, make it reusable for others, and describe best practices and good designs. Patterns are solutions to recurrent problems. This paper addresses database integrity problems from a pattern perspective. Even though the number of vendors of database management systems (DBMS) is quite high, the number of available solutions to integrity problems is limited; they all learned from past experience, applying the same solutions over and over again. The solutions applied in database management systems to avoid integrity threats can be formalized as a pattern language. Constraints, transactions, locks, etc., are recurrent solutions to integrity threats and should therefore be treated accordingly, as patterns.

This paper presents a solution to the optimal power flow problem of large distribution systems via a simple genetic algorithm. The objective is to minimize the fuel cost while keeping the power outputs of generators, bus voltages, shunt capacitors/reactors and transformer tap settings within their secure limits. CPU time can be reduced by decomposing the optimization constraints into active constraints, manipulated directly by the genetic algorithm, and passive constraints, maintained within their soft limits using a conventional constrained load flow. The IEEE 30-bus system has been studied to show the effectiveness of the algorithm.
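The way a genetic algorithm handles active constraints directly, here via a penalty added to the fitness, can be illustrated with a deliberately tiny real-coded GA. The quadratic cost curve, generator limits, and GA settings below are all hypothetical toy values, not the paper's IEEE 30-bus formulation.

```python
import random

random.seed(1)

def cost(p):
    # toy quadratic fuel-cost curve for a single generator (hypothetical)
    return 0.01 * p * p + 2.0 * p + 10.0

def fitness(p, p_min=20.0, p_max=80.0):
    # active constraint (generator output limits) handled by the GA
    # itself through a steep penalty on any violation
    penalty = 1e3 * (max(0.0, p_min - p) + max(0.0, p - p_max))
    return cost(p) + penalty

# minimal real-coded GA: tournament selection + Gaussian mutation
pop = [random.uniform(0.0, 100.0) for _ in range(30)]
for _ in range(60):
    new_pop = []
    for _ in range(len(pop)):
        a, b = random.sample(pop, 2)
        parent = a if fitness(a) < fitness(b) else b   # tournament of two
        new_pop.append(parent + random.gauss(0.0, 2.0))  # mutate offspring
    pop = new_pop
best = min(pop, key=fitness)   # cheapest feasible output found
```

Since the cost curve is increasing, the constrained optimum sits at the lower limit, and the population settles just above it; in the full problem the passive constraints (voltages, taps) would be enforced separately by the load flow.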

Motivation: At present there are many QSAR/QSPR models, based on varied considerations, from mathematical through topological and geometrical to 3D molecular geometry approaches.
Idea: The idea is to create a unitary approach, based on a minimal set of well-known truths, capable of generating an efficient model of property behavior depending on molecular structure.
Method: The first step toward the proposed goal is to create a huge family of molecular descriptors starting from the molecular structure as a graph, considering the bonds and bond types, the atom types, and the most probable 3D geometry of the molecule. Then, using this family of molecular descriptors, a preliminary selection is performed by simple linear regression against the measured property. The resulting set of valid descriptors serves for multivariate regressions in order to reach the best QSAR/QSPR model.
Results: Comparison of the obtained results with other models shows that the proposed Molecular Descriptors Family model is superior to most other models.
Advantages: The model depends only on the microscopic molecular structure and can be applied to any macroscopic molecular property. For a given molecular structure or set of structures, only one calculation of the descriptors is necessary, and it can be applied to more than one measured property without changes; in other words, the MDF of a molecular structure is a molecular invariant.
Disadvantages: Because the set of molecular descriptors is huge (787968 computed values), model finding is time consuming.
Conclusion: Considering the obtained results, the advantages and disadvantages, and the trend of computing performance, the MDF method promises widespread use.
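The preliminary selection step, keeping only those descriptors whose simple linear regression with the measured property passes a threshold, can be sketched as follows. The descriptor columns, property values, and the r² cut-off are hypothetical illustrations, not the actual MDF members or the authors' selection criterion.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# hypothetical measured property and two descriptor columns
prop = [1.0, 2.1, 2.9, 4.2, 5.0]
descriptors = {
    "d1": [0.9, 2.0, 3.1, 4.0, 5.1],   # tracks the property closely
    "d2": [5.0, 1.0, 4.0, 2.0, 3.0],   # essentially noise
}

# pre-selection: keep descriptors whose simple linear regression with
# the property exceeds an r^2 threshold (0.5 here, purely illustrative)
selected = [name for name, col in descriptors.items()
            if pearson_r(col, prop) ** 2 > 0.5]
```

Only the surviving descriptors would then enter the far more expensive multivariate regression search, which is what makes the pre-selection worthwhile for a family of hundreds of thousands of members.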

A large amount of pharmaceutical information exists and new knowledge is created every day. Organizing pharmaceutical information is an important task, because a correct organization of information allows it to be searched and found quickly. The United States National Library of Medicine has already included a ‘Chemicals and Drugs’ chapter in the Medical Subject Headings, and a number of countries have already translated the indexes into their own medical subject classifications. The aim of this paper is to propose a relational database structure, with a Visual FoxPro implementation, for organizing standardised pharmaceutical information using key terms, a very useful tool for any country that needs to organize its pharmaceutical information.

The strength of association between predisposing or causal factors and disease can be expressed as an odds ratio in case-control studies. In order to interpret a point estimate of the odds ratio correctly, we also need to look at the quality of its confidence intervals. The aim of this paper is to introduce three new methods of computing confidence intervals, R2AC, R2Binomial, and R2BinomialC, and to compare their performance with the asymptotic method, here called R2Wald. In order to assess the methods, a PHP program was developed. First, the upper and lower confidence boundaries for all implemented methods were computed and represented graphically. Second, the experimental errors, the standard deviations of the experimental errors, and the deviation relative to the imposed significance level α = 5% were assessed. Estimating the experimental errors and standard deviations at the central point for given sample sizes was the third criterion. The R2Wald and R2AC methods were assessed using random binomial variables (X, Y) and sample sizes (m, n) from 4 to 1000. The methods based on the original binomial method adjusted for the odds ratio (the R2Binomial and R2BinomialC functions) systematically obtain the lowest deviation of the experimental error percentage relative to the expected error percentage, and the R2AC method the closest average of the experimental error percentage to the expected error percentage.
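The asymptotic reference method, the Wald interval computed on the log-odds scale, can be sketched in Python as below. This is the standard textbook formula the new R2* methods are compared against, not the authors' PHP implementation, and the counts in the example are hypothetical.

```python
from math import exp, log, sqrt

def or_wald_ci(x, m, y, n, z=1.96):
    """Odds ratio with the asymptotic (Wald) confidence interval.
    x of m exposed subjects and y of n unexposed subjects are cases."""
    a, b = x, m - x          # exposed: cases / non-cases
    c, d = y, n - y          # unexposed: cases / non-cases
    odds_ratio = (a * d) / (b * c)
    # standard error of ln(OR); undefined if any cell count is zero,
    # which is one of the known weak spots of the asymptotic method
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = exp(log(odds_ratio) - z * se)
    hi = exp(log(odds_ratio) + z * se)
    return odds_ratio, lo, hi

# hypothetical 2x2 table: 20/100 exposed cases, 10/100 unexposed cases
orr, lo, hi = or_wald_ci(20, 100, 10, 100)   # OR = (20*90)/(80*10) = 2.25
```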

Due to their effectiveness in the design and development of software applications and due to their recognized advantages in terms of reusability, Component-Based Software Engineering (CBSE) concepts have been arousing a great deal of interest in recent years. This paper presents and extends a component-based approach to object-oriented database systems (OODB) introduced by us in [1] and [2]. Components are proposed as a new abstraction level for database systems, logical partitions of the schema. In this context, the scope is introduced as an escalated property for transactions. Components are studied from the integrity, consistency, and concurrency control perspectives. The main benefits of our proposed component model for OODB are the reusability of the database design, including the access statistics required for proper query optimization, and a smooth information exchange. The integration of crosscutting concerns into the component database model using aspect-oriented techniques is also discussed. One of the main goals is to define a method for the assessment of component composition capabilities. These capabilities are restricted by the component’s interface and measured in terms of adaptability, degree of composability and acceptability level. The above-mentioned metrics are extended from database components to generic software components.
This paper extends and consolidates into one common view the ideas previously presented by us in [1, 2, 3].
[1] Octavian Paul Rotaru, Marian Dobre, Component Aspects in Object Oriented Databases, Proceedings of the International Conference on Software Engineering Research and Practice (SERP’04), Volume II, ISBN 1-932415-29-7, pages 719-725, Las Vegas, NV, USA, June 2004.
[2] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Integrity and Consistency Aspects in Component-Oriented Databases, Proceedings of the International Symposium on Innovation in Information and Communication Technology (ISIICT’04), pages 131-137, Amman, Jordan, April 2004.
[3] Octavian Paul Rotaru, Marian Dobre, Mircea Petrescu, Reusability Metrics for Software Components, to appear in the Proceedings of the 3rd ACS/IEEE International Conference on Computer Systems and Applications (AICCSA-05), Cairo, Egypt, January 2005.