Schedule

Regularized estimation in supervised learning

Period: 2022-12-16 ~ 2022-12-16
Time: 15:50 ~ 18:00
Venue: Math Bldg 404
Overview
Math Colloquium Seminar
Speaker: 신선영
Affiliation: POSTECH
Topic: Regularized estimation in supervised learning
Description:

Regularized estimation in supervised learning

 

Abstract: Supervised learning is an effective machine learning technique for harnessing the power of big data, where certain variables are used to predict changes in a response variable. Supervised learning models, such as regression and classification, are formulated by minimizing a loss function that associates the response with the predictor variables for goodness of fit. Regularization methods are often used to facilitate variable selection and boost the interpretability of the models.
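As a minimal sketch of the formulation described above (toy data for illustration only, not from the talk), a regression model can be fit by minimizing the squared-error loss over the coefficients:

```python
import numpy as np

# Toy regression problem: predictors X, response y generated from known coefficients.
rng = np.random.default_rng(0)
n, p = 100, 3
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Ordinary least squares: beta_hat = argmin_b ||y - X b||^2,
# i.e., minimizing a squared-error loss for goodness of fit.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With enough observations relative to the noise level, the minimizer of the loss recovers the generating coefficients closely; regularization becomes important when predictors are numerous or many true coefficients are zero.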

 

The first part of the talk introduces regularization techniques for the estimation of various parametric models. The objective function employed by the regularization methods takes the form of the loss function plus a penalty function for complexity control. By shrinking the coefficient estimates towards zero, the technique achieves effective variable selection and accurate parameter estimation. The second part of the talk presents two methods of regularized estimation for semiparametric regression models that have parametric and nonparametric components. In the first scenario we consider, the likelihood function for a semiparametric regression model factors into separate components, with an efficient estimator of the regression parameter available for each component. For the second scenario, we establish a semiparametric framework for meta-analysis that uses summary statistics from multiple studies to learn a single random system, allowing the studies to have different observed data types.
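The "loss plus penalty" objective with shrinkage towards zero can be illustrated with the lasso (an L1 penalty), solved here by cyclic coordinate descent with soft-thresholding. This is a generic sketch of the penalized-estimation idea on hypothetical data, not the specific methods of the talk:

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of the L1 penalty: shrinks z towards zero."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize (1/2n)||y - X b||^2 + lam * ||b||_1 by cyclic coordinate descent."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with feature j removed from the current fit.
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r / n
            # Coordinate update: soft-threshold, then rescale.
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

# Sparse truth: only two of five coefficients are nonzero.
rng = np.random.default_rng(1)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, 0.0, 0.0, -2.0, 0.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)

b = lasso_cd(X, y, lam=0.2)
```

The penalty sets the irrelevant coefficients exactly to zero (variable selection) while the active coefficients are shrunk slightly towards zero (the bias paid for complexity control).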


Posted by Admin · 2022-09-13 13:37 · Views: 401