Evaluating machine learning (ML) models responsibly requires more than
computing overall loss metrics. Before a model goes into production,
it is critical to audit the training data and evaluate its predictions for
bias.
This module looks at the different types of human bias that can surface in training data. It then provides strategies for identifying and mitigating those biases, and for evaluating model performance with fairness in mind.
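As a rough illustration of what "evaluating predictions for bias" can look like in practice, the sketch below computes one common fairness check: comparing a model's positive-prediction rate across subgroups (a demographic parity gap). The function names and the data are hypothetical, not part of this module's text.

```python
# Minimal sketch of a subgroup fairness audit (hypothetical example data).

def positive_rate(preds):
    """Fraction of predictions that are positive (1)."""
    return sum(preds) / len(preds)

def demographic_parity_gap(preds_by_group):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Binary predictions for two subgroups of the evaluation set.
preds_by_group = {
    "group_a": [1, 0, 1, 1],  # 75% predicted positive
    "group_b": [0, 0, 1, 0],  # 25% predicted positive
}

gap = demographic_parity_gap(preds_by_group)
print(gap)  # 0.5 -- a large gap is a signal worth investigating, not proof of bias
```

A large gap does not by itself prove the model is unfair, but it flags a disparity to investigate against the data and the deployment context.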
Key points:
- This module focuses on identifying and mitigating human biases that can negatively impact machine learning models.
- You'll learn how to proactively examine data for potential bias before model training and how to evaluate your model's predictions for fairness.
- The module explores various types of human biases that can unintentionally be replicated by machine learning algorithms, emphasizing responsible AI development.
- It builds upon foundational machine learning knowledge, including linear and logistic regression, classification, and handling numerical and categorical data.

Last updated 2025-01-03 UTC.