Search published articles


Showing 11 results for Regression

Abbas Saghaei, Maryam Rezazadeh-Saghaei, Rasoul Noorossana, Mehdi Doori,
Volume 23, Issue 4 (11-2012)
Abstract

In many industrial and non-industrial applications, the quality of a process or product is characterized by a relationship between a response variable and one or more explanatory variables. This relationship is referred to as a profile. In the past decade, profile monitoring has been studied extensively for normal response variables, but profiles with non-normal response variables have received little attention. In this paper, the focus is on binary responses following the Bernoulli distribution, due to their application in many fields of science and engineering. Some methods have been suggested for monitoring such profiles in Phase I, the modeling phase; however, no method has been proposed for monitoring them in Phase II, the detecting phase. In this paper, two methods are proposed for Phase II logistic profile monitoring. The first is a combination of two exponentially weighted moving average (EWMA) control charts for monitoring the mean and variance of the residuals defined in logistic regression models, and the second is a multivariate T2 chart for monitoring the model parameters. A simulation study is conducted to investigate the performance of the methods.
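As a rough illustration of the first method, an EWMA statistic with time-varying control limits can be computed over a residual sequence. This is a minimal sketch, not the paper's implementation; the smoothing constant, limit width, and in-control parameters are illustrative assumptions:

```python
import numpy as np

def ewma_chart(residuals, lam=0.2, L=3.0, sigma=1.0, mu0=0.0):
    """EWMA statistics and time-varying control limits for a residual sequence."""
    z = np.empty(len(residuals))
    prev = mu0
    for i, r in enumerate(residuals):
        prev = lam * r + (1 - lam) * prev  # exponential smoothing of residuals
        z[i] = prev
    t = np.arange(1, len(residuals) + 1)
    # limits widen with t and converge to the asymptotic value
    width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    ucl, lcl = mu0 + width, mu0 - width
    signals = (z > ucl) | (z < lcl)
    return z, lcl, ucl, signals
```

A sustained shift in the residual mean pushes the EWMA statistic outside the limits within a few samples, which is the signaling behavior the chart is designed for.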
Rassoul Noorossana, Abbas Saghaei, Hamidreza Izadbakhsh, Omid Aghababaei,
Volume 24, Issue 2 (6-2013)
Abstract

In certain statistical process control applications, the quality of a process or product can be characterized by a function commonly referred to as a profile. Potential applications of profile monitoring include cases where the quality characteristic of interest is modeled using binary, multinomial, or ordinal variables. In this paper, profiles with a multinomial response are studied. For this purpose, multinomial logit regression (MLR) is considered as the basis. The MLR model is then converted to a Poisson GLM with a log link. Two methods, based on multivariate exponentially weighted moving average (MEWMA) statistics and likelihood ratio test (LRT) statistics, are proposed to monitor MLR profiles in Phase II. The performance of these methods is evaluated by the average run length (ARL) criterion. A case study from an alloy fastener manufacturing process is used to illustrate the implementation of the proposed approach. Results indicate satisfactory performance for the proposed methods.
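The MEWMA statistic mentioned above can be sketched as follows for a generic multivariate observation stream; the smoothing constant and in-control parameters are illustrative assumptions, not values from the paper:

```python
import numpy as np

def mewma_stats(X, mu0, Sigma, lam=0.1):
    """T^2-type MEWMA statistics for an (n x p) sequence of p-variate observations."""
    p = X.shape[1]
    z = np.zeros(p)
    stats = []
    for i, x in enumerate(X, start=1):
        z = lam * (x - mu0) + (1 - lam) * z  # vector EWMA of deviations
        # in-control covariance of z at time i (exact, time-varying form)
        cz = lam * (1 - (1 - lam) ** (2 * i)) / (2 - lam) * Sigma
        stats.append(z @ np.linalg.solve(cz, z))
    return np.array(stats)
```

An alarm is raised when the statistic exceeds a control limit chosen to give the desired in-control ARL.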
Taha Hosseinhejazi, Majid Ramezani, Mirmehdi Seyyed-Esfahani, Ali Mohammad Kimiagari,
Volume 24, Issue 2 (6-2013)
Abstract

Control of production processes in an industrial environment requires the correct setting of input factors, so that output products with desirable characteristics result at minimum cost. Moreover, such systems have to meet a set of quality characteristics to satisfy customer requirements. Identifying the most effective factors in the design of the process, which supports continuous and continual improvement, has recently been discussed from different viewpoints. In this study, we examine quality engineering problems in which several characteristics and factors are to be analyzed through a simultaneous equations system. In addition, several probabilistic covariates can be included in the proposed model. The main purpose of this model is to identify interrelations among exogenous and endogenous variables, which give important insight for systematic improvements of quality. Finally, the proposed approach is described analytically through a numerical example.
Rassoul Noorossana, M. Nikoo,
Volume 26, Issue 2 (7-2015)
Abstract

In many manufacturing processes, the quality of a product is characterized by a nonlinear relationship between a dependent variable and one or more independent variables. Using nonlinear regression to monitor nonlinear profiles has been proposed in the profile monitoring literature, but it faces two problems: 1) the distribution of the regression coefficients in small samples is unknown, and 2) as process complexity increases, the number of regression parameters grows and the efficiency of the control charts decreases. In this paper, the wavelet transform is used for Phase II monitoring of nonlinear profiles. In the wavelet transform, two parameters specify the smoothing level: the threshold and the decomposition level of the regression function. First, the decomposition level is specified using the adjusted coefficient of determination, and then process performance is monitored using the mean of the wavelet coefficients and the profile variance. The efficiency of the proposed control charts, based on the average run length (ARL) criterion for real data, is compared with that of existing control charts for monitoring nonlinear profiles in Phase II.
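As a minimal sketch of wavelet-based profile smoothing, a one-level Haar transform with hard thresholding of the detail coefficients looks like this (the transform choice and threshold are illustrative; the paper selects the decomposition level via the adjusted coefficient of determination):

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Inverse of the one-level Haar transform."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, thresh):
    """Hard-threshold the detail coefficients, then reconstruct the profile."""
    a, d = haar_dwt(x)
    d = np.where(np.abs(d) > thresh, d, 0.0)
    return haar_idwt(a, d)
```

Monitoring statistics can then be built on the retained wavelet coefficients rather than on a growing set of regression parameters.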



Roghaye Hemmatjou, Nasim Nahavandi, Behzad Moshiri,
Volume 27, Issue 3 (9-2016)
Abstract

In most multi-criteria decision-analysis (MCDA) problems in which the Choquet integral is used as the aggregation function, the coefficients of the Choquet integral (the capacity) are not known in advance; they are calculated by capacity definition methods. In these methods, the preference information of the decision maker (DM) is used to constitute a possible solution space. Methods based on optimizing an objective function most often suffer from three drawbacks. Firstly, the selection of the ultimate solution from the solution set is done arbitrarily. Secondly, the solution may provide more information than the DM actually supplied. Thirdly, the DM may not be able to fully interpret the results. Robust capacity definition methods have been proposed to overcome these drawbacks; however, these methods do not consider evenness (uniformity), which is a major property of a capacity. Since capacity definition methods use preference information on only a subset of alternatives, called reference alternatives, defining the capacity as uniformly as possible could improve its capability in evaluating non-reference alternatives. This paper proposes an algorithm to define a capacity that is based only on the preference information of the DM and is consequently representative. Furthermore, it improves the evenness of the capacity and consequently its reliability in evaluating non-reference alternatives. The algorithm is used to evaluate power plant projects. Power plant projects are among the most important national projects in Iran, and a major portion of national capital is invested in them, so these projects should be scientifically evaluated to assess their performance. Case-specific criteria are considered in addition to the general criteria used in project performance evaluation. The evaluation results obtained from the proposed algorithm are compared with those of the most representative utility function method.
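For reference, the discrete Choquet integral aggregation discussed above can be computed as follows; the criteria scores and capacity values in the test are illustrative, not taken from the paper:

```python
def choquet(values, capacity):
    """Discrete Choquet integral of `values` (dict criterion -> score in [0, 1])
    with respect to `capacity` (dict frozenset-of-criteria -> weight,
    monotone, with capacity of the full criteria set equal to 1)."""
    order = sorted(values, key=values.get)  # criteria by ascending score
    total, prev = 0.0, 0.0
    remaining = set(order)
    for c in order:
        v = values[c]
        # each score increment is weighted by the capacity of the
        # criteria that still attain at least that level
        total += (v - prev) * capacity[frozenset(remaining)]
        prev = v
        remaining.discard(c)
    return total
```

With an additive capacity the Choquet integral reduces to a weighted mean; non-additive capacities let it model interaction between criteria, which is what capacity definition methods must pin down.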


Rassoul Noorossana, Somayeh Khalili,
Volume 32, Issue 1 (1-2021)
Abstract

In the last few decades, profile monitoring in univariate and multivariate environments has drawn considerable attention in the area of statistical process control. In multivariate profile monitoring, more than one response variable must be related to one or more explanatory variables. In this paper, the multivariate multiple linear profile monitoring problem is addressed under the assumption of autocorrelation among observations. A multivariate linear mixed model (MLMM) is proposed to account for the autocorrelation between profiles. Two control charts, in addition to a combined method, are then applied to monitor the profiles in Phase II. Finally, the performance of the presented method is assessed in terms of average run length (ARL). The simulation results demonstrate that the proposed control charts perform appropriately in signaling out-of-control conditions.
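The ARL criterion used here is typically estimated by Monte Carlo simulation: run the chart repeatedly until it signals and average the run lengths. The sketch below is generic and illustrative; the chart statistic and control limit are placeholders, not the paper's:

```python
import random

def run_length(chart_stat, limit, max_n=10_000):
    """Number of samples drawn until the first out-of-control signal."""
    for n in range(1, max_n + 1):
        if chart_stat() > limit:
            return n
    return max_n  # truncate extremely long runs

def estimate_arl(chart_stat, limit, reps=2000, seed=1):
    """Average run length over `reps` independent simulated runs."""
    random.seed(seed)
    return sum(run_length(chart_stat, limit) for _ in range(reps)) / reps
```

For example, a statistic |Z| with Z standard normal and a limit of 3 signals with probability about 0.0027 per sample, giving an in-control ARL near 370.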
 
Marwa El-Mahalawy, M. Samuel, N. Fouda, Sara El-Bahloul,
Volume 32, Issue 2 (6-2021)
Abstract

Wire Electrical Discharge Machining (WEDM) is a non-traditional thermal machining process used to manufacture irregularly profiled parts. Machining of ductile cast iron (ASTM A536) is studied under several machining factors that affect the WEDM process: pulse on time (Ton), pulse off time (Toff), peak current (Ip), voltage (V), and wire speed (S). To optimize the machining factors, their settings are determined through an experimental design using the Taguchi method. The optimization objective is to achieve maximum Material Removal Rate (MRR) and minimum Surface Roughness (SR). Additionally, analysis of variance (ANOVA) is used to identify the most significant factors, and a regression analysis is carried out to predict the MRR and SR from the machining factors. Based on the results, the best control factors for reaching the maximum MRR are Ton = 32 μs, Toff = 8 μs, Ip = 4 A, S = 40 mm/min, and V = 70 V, whereas the optimal control factors that achieve the minimum SR are Ton = 8 μs, Toff = 8 μs, Ip = 2 A, S = 20 mm/min, and V = 30 V. It is hypothesized that the best combination of control factors that achieves minimum SR and maximum MRR is Ton = 8 μs, Toff = 8 μs, Ip = 5 A, S = 50 mm/min. The microstructure of the surface machined under the optimal conditions shows a very narrow recast layer at the top of the machined surface.
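The Taguchi method ranks factor settings by signal-to-noise (S/N) ratio. The standard larger-the-better ratio (appropriate for MRR) and smaller-the-better ratio (appropriate for SR) can be computed as below; this is a generic sketch, not tied to the paper's experimental data:

```python
import math

def sn_larger_is_better(ys):
    """Taguchi S/N ratio when a larger response (e.g. MRR) is better."""
    return -10 * math.log10(sum(1 / y**2 for y in ys) / len(ys))

def sn_smaller_is_better(ys):
    """Taguchi S/N ratio when a smaller response (e.g. SR) is better."""
    return -10 * math.log10(sum(y**2 for y in ys) / len(ys))
```

For each factor, the level with the highest mean S/N ratio across the orthogonal-array runs is selected as optimal.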
M Kaladhar, Vss Sameer Chakravarthy, Psr Chowdary,
Volume 32, Issue 3 (9-2021)
Abstract

Surface quality is a technical prerequisite in the manufacturing industries and can be treated as a quality index for machined parts. Attaining an appropriate surface finish plays a key role in the functional performance of a machined part and is typically influenced by the machining parameters. Consequently, establishing the relationship between surface roughness (Ra) and the machining parameters is a highly focused task. In the current work, response surface methodology (RSM) based regression models and a flower pollination algorithm (FPA) based sparse data model were developed to predict the minimum value of surface roughness in hard turning of AISI 4340 steel (35 HRC) using a single nanolayer TiSiN-TiAlN PVD-coated cutting insert. The results obtained from this approach agreed well with the experimental results, as the standard deviation of the estimated values was only 0.0804 (overall) and 0.0289 (for Ra below 1 µm). Compared with the RSM models, the proposed FPA-based model showed the lowest mean absolute percentage error and the strongest correlation coefficient among the models, at 99.75%. The behavior of the machining parameters and their interactions with respect to surface roughness in the developed models is discussed with a Pareto chart. It was observed that the feed rate was the most significant parameter influencing surface roughness. In conclusion, the FPA sparse data model is a better choice than the RSM-based regression models for predicting surface roughness in hard turning of AISI 4340 steel (35 HRC). To the best of the authors' knowledge, a model using FPA-based sparse data for surface roughness in hard turning has not previously been reported. This model provides a more dependable estimation than the multiple regression models.
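A second-order response surface of the kind fitted by RSM can be estimated by ordinary least squares over a quadratic design matrix (intercept, linear, interaction, and squared terms). The sketch below is illustrative only; the paper's factor levels and responses are not reproduced:

```python
import numpy as np

def fit_rsm(X, y):
    """Fit y ~ b0 + sum b_i x_i + sum b_ij x_i x_j (i <= j) by least squares.

    X is an (n x k) matrix of factor settings; returns the design matrix
    and the fitted coefficient vector."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    cols = [np.ones(n)]                                     # intercept
    cols += [X[:, i] for i in range(k)]                     # linear terms
    cols += [X[:, i] * X[:, j]                              # squares + interactions
             for i in range(k) for j in range(i, k)]
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A, beta
```

Predictions are then `A_new @ beta` for new factor settings, and model adequacy is judged with ANOVA and residual diagnostics.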
Yulial Hikmah, Vindaniar Yuristamanda, Ira Rosianal Hikmah, Karin Amelia Safitri,
Volume 33, Issue 2 (6-2022)
Abstract

Flooding is a serious problem that can occur in many countries in the world. In tropical countries such as Indonesia, flooding is generally caused by rainfall well above normal. Almost all cities in Indonesia experience flooding every year, including DKI Jakarta, the capital of Indonesia. Based on 2020 data from the National Disaster Management Agency (BNPB), East Jakarta is a city that is prone to flooding. Considering the many losses caused by flooding, disaster mitigation efforts are needed to minimize the risk. One way to mitigate the risk of natural disasters is to buy insurance products; however, not all people buy flood insurance products, owing to economic and social factors. This research builds a probit regression model to determine the factors that influence Indonesians' interest in buying flood insurance products, and then tests the model. The results show that, of the 19 factors considered, eight significantly affect Indonesians' interest in purchasing flood insurance products. Finally, this research calculates the model's accuracy, obtaining 84.3%.
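A probit model maps a linear predictor through the standard normal CDF. The sketch below shows prediction and accuracy computation under assumed (hypothetical) coefficients; it is not the paper's fitted model:

```python
import math

def probit_prob(x, beta):
    """P(y = 1 | x) under a probit model: Phi(x . beta),
    where Phi is the standard normal CDF."""
    eta = sum(b * v for b, v in zip(beta, x))
    return 0.5 * (1 + math.erf(eta / math.sqrt(2)))

def accuracy(y_true, probs, cutoff=0.5):
    """Share of observations classified correctly at the given cutoff."""
    preds = [1 if p >= cutoff else 0 for p in probs]
    return sum(p == t for p, t in zip(preds, y_true)) / len(y_true)
```

Factor significance is judged from the estimated coefficients' z-statistics, and the reported 84.3% accuracy corresponds to the `accuracy` computation over the fitted probabilities.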
 
Mohammad Yaseliani, Majid Khedmati,
Volume 34, Issue 1 (3-2023)
Abstract

Diagnosis of diseases is a critical problem whose solution can support more accurate decision-making regarding patients' health and required treatments. Machine learning is a solution for detecting and understanding the symptoms related to heart disease. In this paper, a logistic regression model is proposed to predict heart disease based on a dataset of 299 people and 13 variables, and to evaluate the impact of different predictors on the outcome. First, the effect of each predictor on precise prediction of the outcome is evaluated and analyzed using statistical measures such as AIC scores and p-values. The logit models of different predictors are also analyzed and compared to select the predictors with the highest impact on heart disease. Then, the combined model that best fits the dataset is determined using two statistical approaches. Based on the results, the proposed model predicts heart disease with a sensitivity of 84.21% and a specificity of 90.38%. Finally, using normal probability density curves, the likelihood ratios are established based on classes 1 and 0. The results show that the likelihood ratio classifier performs as satisfactorily as the logistic regression model.
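Sensitivity and specificity translate into likelihood ratios in the standard way. The sketch below computes them from confusion-matrix counts; the counts in the test are hypothetical values chosen only to reproduce the reported percentages, not the paper's actual confusion matrix:

```python
def diagnostic_measures(tp, fn, tn, fp):
    """Sensitivity, specificity, and likelihood ratios from confusion-matrix counts."""
    sens = tp / (tp + fn)        # true positive rate
    spec = tn / (tn + fp)        # true negative rate
    lr_pos = sens / (1 - spec)   # LR+: how much a positive result raises the odds
    lr_neg = (1 - sens) / spec   # LR-: how much a negative result lowers the odds
    return sens, spec, lr_pos, lr_neg
```

A classifier is informative when LR+ is well above 1 and LR- is well below 1, which is the comparison the paper draws between the likelihood ratio classifier and the logistic regression model.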
Muhammad Asim Siddique,
Volume 34, Issue 4 (12-2023)
Abstract

 
Software testing is the process of assessing the functionality of a software program. It checks for inaccuracies and gaps and whether the application's outcome matches desired expectations before the software is installed and goes into production. In large organizations, the development team normally allocates a high portion of the estimated development time, cost, and effort to regression testing to assure software quality. The quality of the developed software relies on three factors: time, efficiency, and the testing technique used for regression testing. Regression testing is an important component of software testing and maintenance, taking up a significant share of the total testing time and resources that organizations spend on testing. The key to successful regression testing using Test Case Prioritization (TCP), Test Case Selection (TCS), and Test Case Minimization (TCM) is maximizing the test cases' effectiveness while considering the limited resources available. Numerous TCP, TCS, and TCM techniques have been introduced to maximize efficiency as measured by the Average Percentage of Fault Detection (APFD). In recent studies, TCP and TCS techniques can give the highest APFD scores; however, each TCP and TCS approach shows limitations, such as high execution cost and time and a lack of information, and approaches that cover multiple test-suite variables (time, cost, efficiency) have remained inefficient. Thus, a hybrid TCP and TCS technique is needed that gives a high APFD score while providing good coverage of test cases relative to cost and execution time. The proposed hybrid test case selection and prioritization technique excludes similar and faulty test cases to reduce test-suite size, and it has several advantages, including reduced execution time and improved fault detection ability.
The proposed hybrid Enhanced Test Case Selection and Prioritization Algorithm (ETCSA) is a promising approach that selects only modified test suites to improve efficiency. However, the efficiency of the proposed technique may depend on the specific criteria for selecting modified test cases and on the software's characteristics. The hybrid technique aims to define an ideal ranking order of test cases, allowing higher coverage and early fault detection with a reduced test suite size. This study reviews hybrid TCP and TCS techniques for reducing testing time and cost and improving efficiency in regression testing. For each TCS and TCP technique in regression testing, apparent standards, benefits, and restrictions are identified.
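The APFD metric used throughout can be computed from a test-case ordering and a record of which faults each test detects. A minimal sketch follows; the test-case and fault names in the example are hypothetical:

```python
def apfd(order, faults_of):
    """Average Percentage of Fault Detection for a test-case ordering.

    order: list of test-case ids in execution order.
    faults_of: dict mapping each test id to the set of faults it detects.
    """
    all_faults = set().union(*faults_of.values())
    n, m = len(order), len(all_faults)
    first_pos = {}  # position at which each fault is first detected
    for pos, t in enumerate(order, start=1):
        for f in faults_of.get(t, ()):
            first_pos.setdefault(f, pos)
    return 1 - sum(first_pos[f] for f in all_faults) / (n * m) + 1 / (2 * n)
```

Orderings that expose faults earlier score closer to 1, which is why TCP techniques are compared by APFD.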

