Lectures / Seminars

MINDS Seminar on Data ScienceㅣProminent Roles of Conditionally Invariant Components in Domain Adaptation: Theory and Algorithms

Date: 2023-05-09
Time: 10:00 ~ 11:00
Speaker: Wooseok Ha
Affiliation: Amazon Web Services

Details:

https://us06web.zoom.us/j/6888961076?pwd=ejYxN05jNmhUa25PU2JzSUJvQ1haQT09

ID : 688 896 1076 / PW : 54321


Domain adaptation (DA) is a statistical learning problem that arises when the distribution of the source data used to train a model differs from that of the target data used to test the model. While many DA algorithms have demonstrated considerable empirical success, the unavailability of target labels in DA makes it challenging to determine their effectiveness in new datasets without a theoretical basis. Therefore, it is essential to clarify the assumptions required for successful DA algorithms and quantify the corresponding theoretical guarantees under these assumptions. In this work, we focus on the assumption that conditionally invariant components (CICs) useful for prediction exist across the source and target data. Under this assumption, we demonstrate that CICs, which can be estimated through conditional invariant penalty (CIP), play three prominent roles in providing theoretical guarantees for DA algorithms. First, we introduce a new CIC-based algorithm called importance-weighted conditional invariant penalty (IW-CIP), which has target risk guarantees beyond simple settings such as covariate shift and label shift. Second, we show that CICs can be used to identify large discrepancies between source and target risks of other DA algorithms. Finally, we demonstrate that incorporating CICs into the domain invariant projection (DIP) algorithm helps to address its well-known failure scenario caused by label-flipping features. We support our new algorithms and theoretical findings via numerical experiments on synthetic data, MNIST, CelebA, and Camelyon17 datasets.
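To make the idea of a conditional invariance penalty (CIP) concrete, here is a minimal illustrative sketch. It assumes mean-matching as the discrepancy measure and several labeled source domains; the actual penalty used in the talk's algorithms may differ, and all function and variable names below are hypothetical.

```python
import numpy as np

def conditional_invariance_penalty(feats_by_domain, labels_by_domain):
    """Illustrative CIP sketch: for each class y, penalize the spread of
    the class-conditional feature means across labeled domains.
    Mean-matching is only one possible discrepancy measure."""
    classes = np.unique(np.concatenate(labels_by_domain))
    penalty = 0.0
    for y in classes:
        # class-conditional feature mean in each domain containing class y
        means = [f[l == y].mean(axis=0)
                 for f, l in zip(feats_by_domain, labels_by_domain)
                 if np.any(l == y)]
        means = np.stack(means)
        # squared deviation of each domain's class mean from the overall mean
        penalty += ((means - means.mean(axis=0)) ** 2).sum()
    return penalty

# toy check: identical class-conditional means across domains -> zero penalty
f1 = np.array([[0.0, 1.0], [2.0, 3.0]])
f2 = np.array([[0.0, 1.0], [2.0, 3.0]])
l1 = np.array([0, 1])
l2 = np.array([0, 1])
print(conditional_invariance_penalty([f1, f2], [l1, l2]))  # 0.0
```

Features whose class-conditional distributions agree across domains incur no penalty, which is the sense in which minimizing such a term steers a model toward conditionally invariant components.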
Posted by Admin · 2023-05-04 16:12 · Views: 297