Friday, April 18, 2025

AI-Driven Despair Meets Machine Learning Prevention

 


Disclaimer from the Author: 

This article is a study and a reflection of my perspective, formulated from various frameworks and best practices I have encountered in my academic and professional journey. The examples and figures presented are conceptual and should be treated as guiding principles, not as real-world scenarios or validated data.

Readers are advised to use the content herein as a reference for exploring ideas and strategies, not as a definitive source of operational frameworks or policy implementation. While the insights aim to inspire critical thinking and understanding, they are not grounded in empirical research or official government practices.

Users should exercise discretion and seek further research or professional guidance when applying these principles to real-life situations.

Converging Dangers: AI Algorithms vs Adolescent Mental Health

The tragic case of Molly Russell and the growing evidence of algorithm-induced despair in young users highlight the risks of advanced AI when used without ethical boundaries. Simultaneously, research by Kim et al. (2023) offers a compelling counter-narrative: AI, when ethically deployed, can predict and potentially help prevent adolescent suicidal ideation with high accuracy.

In both cases, machine learning is the core technology—yet its impact depends entirely on the intent behind its use.


📊 Key Findings from Kim et al. (2023):

| Factor | Impact on Suicidal Thinking |
| --- | --- |
| Sadness & Despair | 57.4% |
| Stress Status | 19.8% |
| Age | 5.7% |
| Low Household Income | 4.0% |
| Poor Academic Achievement | 3.4% |
| Sex | 2.1% |

👉 These factors mirror many of the emotional and psychosocial triggers exploited by AI-curated content feeds on platforms like TikTok and Instagram.
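To make the relative weights concrete, here is a toy scoring sketch that treats the percentages above as weights in a simple linear index. The 0–1 scaling of factor values and the example student are hypothetical; Kim et al. trained an XGBoost model, not a linear score, so this is purely an illustration of how the factors' relative importance could translate into a composite risk number.

```python
# Illustrative only: a linear risk score that uses the relative importance
# percentages from the table above (Kim et al., 2023) as weights. The
# factor values, the 0-1 scaling, and the example student are hypothetical;
# the study itself used a trained XGBoost model, not a linear score.

# Relative importance (%) of each factor, from the table above.
FACTOR_WEIGHTS = {
    "sadness_despair": 57.4,
    "stress_status": 19.8,
    "age": 5.7,
    "low_household_income": 4.0,
    "poor_academic_achievement": 3.4,
    "sex": 2.1,
}

def risk_score(factors: dict) -> float:
    """Weighted average of factor values (each scaled to 0..1).

    Returns a score in 0..1, normalized by the total of the listed
    weights; higher means more high-importance factors are present.
    A real model would also learn non-linear interactions.
    """
    total_weight = sum(FACTOR_WEIGHTS.values())
    weighted = sum(FACTOR_WEIGHTS[name] * value for name, value in factors.items())
    return weighted / total_weight

# A student reporting high sadness/despair dominates the score,
# mirroring the 57.4% importance reported in the study.
student = {
    "sadness_despair": 1.0,
    "stress_status": 0.5,
    "age": 0.3,
    "low_household_income": 0.0,
    "poor_academic_achievement": 0.0,
    "sex": 0.0,
}
print(round(risk_score(student), 3))
```

Even with stress, income, and academics at zero or moderate levels, a maximal sadness/despair value pushes the composite score toward the top of the range, which is exactly the dominance the table expresses.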


🇵🇭 Filipino Context: Algorithmic Risk Amplified

In the Philippines:

  • Youth spend over 3.5 hours daily on social media.

  • Mental health stigma and limited access to professionals compound the issue.

  • Platforms rarely moderate depression-related content in Tagalog or regional dialects.

🧠 Sadness and despair, the top ML-predicted suicidal indicators, are also the primary emotions reinforced by AI algorithms that favor engagement over well-being.

In effect, one type of AI (social media) fuels the crisis, while another (ML diagnostics) tries to stop it.


📌 Real-World Implications:

  1. From Echo Chambers to Safe Zones

    • Problem: Social media AIs reinforce depressive content.

    • Solution: Integrate machine learning models like XGBoost in mental health apps or school guidance systems to flag at-risk students.

  2. Predictive Risk Profiling in Filipino Schools

    • With localized data, a DepEd initiative could replicate Kim et al.’s model to identify early signs of suicidal ideation among Filipino adolescents.

  3. Ethical Use of AI

    • Social media platforms must incorporate ethical flags based on ML risk indicators (e.g., sadness, despair) to intervene or redirect content.
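As a sketch of how the "flag at-risk students" idea in point 1 might look in code, the snippet below uses scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost (both are gradient-boosted tree models), trained on synthetic survey-style data. The feature layout, the toy labeling rule, and the 0.5 flag threshold are all assumptions for illustration, not details from Kim et al.'s pipeline.

```python
# A minimal sketch of the "flag at-risk students" idea, assuming survey
# responses scaled to 0..1. GradientBoostingClassifier stands in for
# XGBoost; the training data is synthetic and the labeling rule below
# (risk driven mostly by the sadness column, echoing the study's top
# factor) is a toy assumption, not real data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Columns: sadness, stress, age, income, academics, sex (all 0..1).
X = rng.random((500, 6))
y = (X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.random(500) > 0.9).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

def flag_at_risk(students: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Return indices of students whose predicted risk exceeds threshold."""
    risk = model.predict_proba(students)[:, 1]
    return np.where(risk > threshold)[0]

# Two hypothetical students: one high-sadness/high-stress, one low overall.
cohort = np.array([[0.9, 0.8, 0.4, 0.2, 0.3, 0.0],
                   [0.1, 0.2, 0.4, 0.2, 0.3, 0.0]])
print(flag_at_risk(cohort))
```

In a school guidance setting, the flagged indices would route students to a counselor for human follow-up rather than trigger any automated action; the model only prioritizes attention.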



🧩 Integration: Risk Management Strategy

| AI Use Case | Positive Potential | Negative Risk |
| --- | --- | --- |
| Predictive ML (e.g., XGBoost) | Early intervention for suicide prevention | Data privacy concerns |
| Engagement Algorithms | Keeps users active | Reinforces despair, self-harm loops |
| SHAP Value Interpretability | Transparent diagnostics | Misuse in surveillance if unregulated |
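The table lists SHAP value interpretability as a use case. As a minimal illustration of what a SHAP value is, the sketch below computes exact Shapley attributions for one prediction of a toy linear risk model by enumerating feature subsets; the real shap library approximates this efficiently for tree models such as XGBoost. The model, its weights, and the baseline/instance values here are all hypothetical.

```python
# Conceptual sketch of what SHAP computes: the exact Shapley value of each
# feature for one prediction, by averaging its marginal contribution over
# all subsets of the other features. The shap library does this efficiently
# for tree models like XGBoost; the toy model and inputs are hypothetical.
from itertools import combinations
from math import factorial

FEATURES = ["sadness", "stress", "income"]
BASELINE = {"sadness": 0.2, "stress": 0.3, "income": 0.5}  # average inputs
INSTANCE = {"sadness": 0.9, "stress": 0.7, "income": 0.1}  # one student

def model(x: dict) -> float:
    """Toy linear risk model (hypothetical weights)."""
    return 0.6 * x["sadness"] + 0.3 * x["stress"] + 0.1 * (1 - x["income"])

def shapley(feature: str) -> float:
    """Exact Shapley value: weighted average marginal contribution."""
    others = [f for f in FEATURES if f != feature]
    n = len(FEATURES)
    value = 0.0
    for size in range(len(others) + 1):
        for subset in combinations(others, size):
            # Features in the subset take the instance's value;
            # the rest stay at the baseline.
            without = {f: INSTANCE[f] if f in subset else BASELINE[f]
                       for f in FEATURES}
            with_f = dict(without, **{feature: INSTANCE[feature]})
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            value += weight * (model(with_f) - model(without))
    return value

for f in FEATURES:
    print(f, round(shapley(f), 3))
```

For this linear toy model the attributions reduce to weight × (instance − baseline), so sadness receives by far the largest share of the prediction. This is the transparency the table refers to: a counselor can see *why* a student was flagged, and the same per-feature visibility is what could be misused for surveillance if left unregulated.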

🛡️ Recommendation for the Philippines

  • Create a National ML-Based Mental Health Risk Index for adolescents in public and private schools.

  • Mandate Ethical AI Standards for social media platforms operating in the country.

  • Public-Private Collaboration: Integrate DOH, DepEd, and tech partners to build data-sharing protocols for proactive intervention.
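As a sketch of what a privacy-preserving data-sharing record under such a protocol might contain, the snippet below defines a hypothetical anonymized risk-index entry. Every field name, the salted-hash scheme, and the risk bands are assumptions for illustration only; an actual schema would be defined jointly by DOH, DepEd, and the tech partners.

```python
# Hypothetical sketch of an anonymized record that a DOH/DepEd data-sharing
# protocol might exchange. All field names and the hashing scheme below are
# assumptions for illustration, not an existing government format.
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass
class RiskIndexRecord:
    student_hash: str   # salted one-way hash, never a name or student ID
    region: str         # coarse location only, e.g. "NCR"
    risk_band: str      # banded ("low"/"elevated"/"high"), not a raw score
    model_version: str  # which model produced the band, for auditability

def anonymize(student_id: str, salt: str) -> str:
    """One-way hash so partners can deduplicate without identifying anyone."""
    return hashlib.sha256((salt + student_id).encode()).hexdigest()[:16]

record = RiskIndexRecord(
    student_hash=anonymize("student-123", salt="per-school-secret"),
    region="NCR",
    risk_band="elevated",
    model_version="2023-pilot",
)
print(json.dumps(asdict(record)))
```

Sharing only banded, pseudonymized records is one way to reconcile the "proactive intervention" goal with the data-privacy risk flagged in the table above: the receiving agency learns where elevated risk is concentrating without ever holding identifiable student data.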


📘 Conclusion

Kim et al.’s study shows that machine learning can become a powerful ally in saving young lives—but only when wielded with intentional compassion and ethical foresight. The case of Molly Russell and the Philippine social media landscape both serve as wake-up calls: AI isn’t inherently good or bad. Its morality lies in how we use it—and who we choose to protect.


Reference:

  • Milmo, D. (2022, October 1). ‘The bleakest of worlds’: how Molly Russell fell into a vortex of despair on social media. The Guardian.
    https://www.theguardian.com/technology/2022/sep/30/how-molly-russell-fell-into-a-vortex-of-despair-on-social-media

  • Kim, H., Son, Y., Lee, H., Kang, J., Hammoodi, A., Choi, Y., Kim, H. J., Lee, H., Fond, G., Boyer, L., Kwon, R., Woo, S., & Yon, D. K. (2023). Machine learning–based prediction of suicidal thinking in adolescents: Derivation and validation in three independent worldwide cohorts in South Korea, Norway, and the USA.
    Frontiers in Psychiatry, 14.
    https://doi.org/10.3389/fpsyt.2023.1156117

  • Department of Education (DepEd), Philippines. (2020). State of Adolescent Mental Health in Schools.
    https://www.deped.gov.ph

  • Statista Research Department. (2023). Average daily time spent on social media by internet users in the Philippines.
    https://www.statista.com/statistics/1064546/philippines-daily-time-spent-on-social-media/

  • World Health Organization. (2021). Mental health of adolescents.
    https://www.who.int/news-room/fact-sheets/detail/adolescent-mental-health


