Mitigating Algorithmic Bias in AI-Based Resume Screening
Drake Austin — Computer Science
Background
Artificial intelligence is increasingly used in hiring, particularly in automated resume screening systems. While these systems improve efficiency, they raise concerns about algorithmic bias.
Bias can arise when models are trained on historical data that reflects societal inequalities. This project explores how such bias manifests in resume screening and how it can be mitigated using fairness-aware techniques.
Methodology
- Collect and preprocess datasets (including synthetic data)
- Develop a baseline ML model (scikit-learn / TensorFlow)
- Evaluate using fairness metrics:
  - Demographic parity
  - Equal opportunity
- Apply mitigation strategies:
  - Rebalancing data
  - Removing sensitive features
  - Fairness-aware algorithms
- Compare performance before and after mitigation
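The two fairness metrics above can be computed directly from model outputs. As a minimal sketch (using NumPy, with a hypothetical toy dataset), demographic parity compares positive-prediction rates across groups, while equal opportunity compares true-positive rates among qualified candidates:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    # Positive-prediction (e.g. "advance to interview") rate per group;
    # the gap is the spread between the best- and worst-treated group.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    # True-positive rate per group, computed only over actually
    # qualified candidates (y_true == 1).
    tprs = [y_pred[(group == g) & (y_true == 1)].mean()
            for g in np.unique(group)]
    return max(tprs) - min(tprs)

# Hypothetical toy data: 8 candidates, one binary sensitive attribute
y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0])   # actually qualified?
y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 0])   # model's screening decision
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # sensitive attribute

print(demographic_parity_gap(y_pred, group))          # 0.25
print(equal_opportunity_gap(y_true, y_pred, group))
```

A gap of 0 on either metric means the groups are treated identically by that criterion; the before/after comparison in the final step would report these gaps alongside accuracy.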
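One concrete form the "rebalancing data" strategy could take is Kamiran–Calders reweighing, which assigns each training sample a weight so that the label and the sensitive attribute become statistically independent in the weighted data. The sketch below (NumPy only, hypothetical toy data; it assumes every group/label combination occurs at least once) shows the idea; the resulting weights could be passed to a scikit-learn estimator via `sample_weight`:

```python
import numpy as np

def reweighing_weights(y, group):
    # Kamiran-Calders reweighing: w = P(group) * P(y) / P(group, y).
    # Assumes each (group, label) cell is non-empty.
    w = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            cell = (group == g) & (y == label)
            expected = (group == g).mean() * (y == label).mean()
            observed = cell.mean()
            w[cell] = expected / observed
    return w

# Hypothetical toy data: group 0 has a much higher positive-label
# rate (3/4) than group 1 (1/4) in the raw data.
y     = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
w = reweighing_weights(y, group)

# After reweighing, the weighted positive rate is equal across groups.
for g in (0, 1):
    m = group == g
    print(g, round(np.average(y[m], weights=w[m]), 6))   # both 0.5
```

Because the mitigation happens entirely in the training data, this approach leaves the model and its features unchanged, which makes the before/after comparison straightforward.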
Anticipated Outcomes
- Functional AI resume screening prototype
- Analysis of bias and mitigation effectiveness
- Trade-off evaluation between fairness and accuracy
- Recommendations for ethical AI design
Significance
This project contributes to ethical AI by addressing fairness in automated decision-making systems. It provides insights for:
- Developers
- Researchers
- Policymakers
- Industry professionals
Timeline
| Date | Task |
|---|---|
| Jan 2027 | Literature review, define scope |
| Feb 2027 | Data collection & preprocessing |
| Mar 2027 | Model design & development |
| Apr 2027 | Complete baseline model |
| May 2027 | Bias evaluation |
| Jun 2027 | Implement mitigation |
| Jul 2027 | Re-evaluate model |
| Aug 2027 | Analyze results |
| Sep 2027 | Final report & presentation |
Budget
| Item | Cost |
|---|---|
| Cloud GPU Credits | $150 |
| ML Tools | $50 |
| Participant Incentives | $200 |
| Paid Datasets | $100 |
| Miscellaneous | $100 |
| Total | $600 |