Accelerate Data Science on GPUs

  1. What is the primary benefit of using cudf.pandas in your data science workflow?
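
For reference, a minimal sketch of the zero-code-change pattern this question refers to; `my_script.py` is a hypothetical, unmodified pandas script.

```python
# cudf.pandas accelerates existing pandas code on the GPU without
# rewriting it: supported operations run on the GPU via cuDF, and
# anything unsupported falls back transparently to CPU pandas.

# In a Jupyter notebook, load the extension BEFORE importing pandas:
#   %load_ext cudf.pandas
#   import pandas as pd        # now GPU-accelerated where possible

# For a script, leave the source untouched and launch it with:
#   python -m cudf.pandas my_script.py
```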

  2. Which Google Cloud runtime environment is recommended for teams that want fast setup, ease of use, and secure collaboration on notebook-based data science workflows?

  3. In the context of GPU-accelerated machine learning, what does running the cuml.accel profilers help you identify?

  4. Which of the following is a best practice when using GPU acceleration for machine learning?

  5. What parameter should you set in XGBoost to enable GPU acceleration?
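
As a reference point for this question, a hedged sketch of the relevant parameters: in XGBoost 2.0 and later, the `device` parameter selects GPU execution, replacing the older `tree_method="gpu_hist"` setting.

```python
# XGBoost >= 2.0: GPU training is enabled with the `device` parameter;
# `tree_method` keeps its device-agnostic value "hist".
params = {
    "device": "cuda",       # run training on the GPU
    "tree_method": "hist",  # histogram-based tree construction
    "objective": "binary:logistic",
}

# Pre-2.0 releases used a combined setting instead:
legacy_params = {"tree_method": "gpu_hist"}
```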

  6. What is the key advantage of using cuML's estimators (e.g., cuml.ensemble.RandomForestClassifier) over their scikit-learn equivalents?
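
The pattern behind this question, sketched under the assumption of a machine with a CUDA GPU and RAPIDS installed: cuML estimators mirror the scikit-learn API, so moving a model to the GPU is typically an import swap.

```python
# cuML mirrors the scikit-learn estimator interface, so the CPU and GPU
# versions are interchangeable in code (sketch, not run here):
#
#   from sklearn.ensemble import RandomForestClassifier  # CPU baseline
#   from cuml.ensemble import RandomForestClassifier     # GPU, same API
#
#   clf = RandomForestClassifier(n_estimators=100)
#   clf.fit(X_train, y_train)    # fit/predict signatures match sklearn
#   preds = clf.predict(X_test)
```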

  7. When using GPU-accelerated XGBoost with cuDF DataFrames, what is a key benefit of passing cuDF data directly to xgb.train() or XGBClassifier.fit()?
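
A hedged sketch of the data flow this question describes; the file name is hypothetical, and running it requires a CUDA GPU with cuDF and XGBoost installed.

```python
# Passing a cuDF DataFrame straight into XGBoost keeps the data on the
# GPU end to end: XGBoost can consume GPU arrays directly, so there is
# no round trip through host memory (e.g. via .to_pandas()).
#
#   import cudf
#   import xgboost as xgb
#
#   X = cudf.read_parquet("features.parquet")   # hypothetical dataset
#   y = X.pop("label")
#   dtrain = xgb.DMatrix(X, label=y)            # no host-side copy
#   booster = xgb.train({"device": "cuda"}, dtrain)
```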

  8. What does the cuml.accel module allow you to do when working with scikit-learn code?

  9. Both cuml.accel.profile and cuml.accel.line_profile are ways to profile your GPU-accelerated scikit-learn code. Which of the following are valid ways to invoke them?
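
For orientation on questions 3, 8, and 9, a sketch of how cuml.accel and its profilers are invoked; the exact flag and magic names below reflect my understanding of recent RAPIDS releases and should be checked against the cuML documentation.

```python
# cuml.accel runs unmodified scikit-learn code on the GPU; its profilers
# report which operations were accelerated and which fell back to CPU.
#
# Command line (script source left unmodified):
#   python -m cuml.accel my_script.py                 # accelerate only
#   python -m cuml.accel --profile my_script.py       # summary report
#   python -m cuml.accel --line-profile my_script.py  # per-line report
#
# Jupyter: load the extension, then use the cell magics:
#   %load_ext cuml.accel
#   %%cuml.accel.profile
#   %%cuml.accel.line_profile
```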