    Learn about AI-Related Risks and AI Risk Management | by Temitope Omosebi | Jul, 2025



    AI-related risks, such as ethical, sustainability and technical risks.
AI-related Risks

AI-related risks refer to unplanned and unexpected outcomes, with potentially disruptive and destructive impacts, associated with AI technologies or systems. AI-related risks also describe the failure or critical malfunctioning of the algorithms, optimisers, hardware and software systems, and operations of an AI product.

Steimers and Schneider provide a comprehensive background for understanding AI-related risks by specifying the sources of AI risk under two broad aspects: the ethical aspect, and the reliability and robustness aspect (1).

The ethical aspect has three sources of risk: fairness, privacy, and the degree of automation and control. The reliability and robustness aspect specifies five sources of risk: the complexity of the task and usage environment, the degree of transparency and explainability, security, system hardware, and technology maturity.

However, Steimers and Schneider's sources of AI-related risk focus only on technical and ethical risks. The current literature demonstrates that AI-related risks also include social and legal risks (2), political and economic risks (3), and environmental risks (4). Beyond this problem of classification, one study notes that the manifestation and impact of AI-related risks differ across industries and practical applications. Hence, this article adopts a generic, domain-based categorisation of AI-related risks.

There are three broad categories of AI-related risks: sustainability risks, technical risks and ethical risks.

1. The sustainability risks related to AI cut across the conventional categorisation of sustainability concerns into economic, social and environmental domains (5). The economic risk category captures the disruptive potential of AI technologies on economic activity or financial performance at different levels, including the organisational, individual and national levels. One of the major social risks associated with AI technologies is their propensity to promote discriminatory practices in decision-making (2). AI technologies can be significantly biased or discriminatory in their information processing or decision-making if the data used to train the AI model or algorithm is biased (6,7). The OECD highlights environmental sustainability concerns related to AI, noting the massive computational resources and waste disposal involved in operating AI technologies (8). In this way, AI technologies exert a considerable impact on the environment, leading to outcomes such as biodiversity degradation, water and air pollution, high energy consumption and soil contamination (8).
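As a rough illustration of the training-data bias point above, the sketch below computes a disparate-impact ratio over hypothetical model decisions for two groups. The decisions, group labels and the 0.8 review threshold (the common "four-fifths" rule of thumb) are assumptions made for illustration, not figures from the cited studies.

```python
# Minimal sketch: checking model decisions for disparate impact across groups.
# All data here are hypothetical; in practice the decisions would come from
# the trained model under review.
from collections import defaultdict

def disparate_impact(decisions, groups, positive=1):
    """Return (ratio, per-group rates): lowest / highest positive-outcome rate."""
    counts = defaultdict(lambda: [0, 0])          # group -> [positives, total]
    for d, g in zip(decisions, groups):
        counts[g][0] += int(d == positive)
        counts[g][1] += 1
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical loan-approval decisions for two demographic groups A and B.
decisions = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1]
groups    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

ratio, rates = disparate_impact(decisions, groups)
print(rates)   # positive-outcome rate per group
print(ratio)   # ratios below ~0.8 (the "four-fifths" rule) are often flagged for review
```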

2. The technical risks in AI are rooted in the algorithms, models, software packages and other technical processes or tools that underpin the operation of AI systems. This category of risk affects the reliability, trustworthiness, usability and vulnerability of AI technologies (1,9,10). Moreover, the foundation of AI technologies, machine learning, is prone to several technical biases that can significantly alter or affect its operation and effectiveness (11). These biases are critical concerns in the application of AI technologies across many sectors because errors are highly likely and, due to the "black-box" character of AI decisions, practitioners may be unable to detect or explain the errors and reconfigure the AI system (10). Finally, the introduction of AI technologies in data-sensitive sectors heightens data security risks, with potential data breaches leading to identity theft and privacy violations (12).
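One common, model-agnostic way to regain some visibility into an otherwise opaque model is permutation importance: shuffle one feature at a time and observe how much performance drops. The sketch below assumes scikit-learn and a synthetic dataset; it is offered as a generic probing technique, not as the specific mitigation proposed in the cited studies.

```python
# Sketch: probing a "black-box" classifier with permutation importance
# (model-agnostic; assumes scikit-learn is available). The dataset is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops:
# features whose permutation hurts performance most drive the model's decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, mean in enumerate(result.importances_mean):
    print(f"feature {i}: importance {mean:.3f}")
```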

3. The ethical risk category is the most multifaceted risk domain in AI technologies, as it often interacts with legal, reputational, and social risks (13). Given this complexity, Douglas and colleagues define AI ethical risk as "any risk associated with an AI that may cause stakeholders to fail one or more of their ethical duties towards other stakeholders" (13).

Using the healthcare industry as our case study, the top five AI risks are: algorithmic bias and discrimination, data privacy and security breaches, regulatory non-compliance, job displacement, and social backlash. A risk assessment matrix summarising the likelihood, severity, consequences and response strategies for these top five AI risks is presented in the table below.

Top 5 AI-related Risks
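To make the structure of such a matrix concrete, the sketch below encodes a small risk register in code: each risk gets a likelihood and severity rating (1–5) plus a response strategy, and their product serves as a simple priority score. All ratings and strategy assignments here are illustrative placeholders, not the values from the article's table.

```python
# Sketch of a simple risk-assessment matrix: risk score = likelihood x severity
# (both on a 1-5 scale). Every rating and strategy below is an illustrative
# placeholder, not a figure from the article's table.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    severity: int     # 1 (negligible) .. 5 (critical)
    response: str     # avoidance, mitigation, transfer or acceptance

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

register = [
    Risk("Algorithmic bias and discrimination", 4, 4, "mitigation"),
    Risk("Data privacy and security breaches",  3, 5, "mitigation"),
    Risk("Regulatory non-compliance",           3, 4, "avoidance"),
    Risk("Job displacement",                    3, 3, "acceptance"),
    Risk("Social backlash",                     2, 3, "transfer"),
]

# Rank risks by priority score, highest first.
for r in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.name}  ->  {r.response}")
```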

What is AI risk management?

In the broad sense, AI risk management refers to the application of risk management strategies (such as risk avoidance, mitigation, transfer and acceptance) to AI-related risks.

This article identifies two major AI risk management guidelines: the National Institute of Standards and Technology's (NIST) framework and the AI risk management blueprint by Bogdanov and colleagues.

1. The National Institute of Standards and Technology provides a comprehensive risk management framework for AI systems (14). The framework is designed to equip organisations and individuals to mitigate AI risks, design (socially, ethically and sustainably) responsible AI, and improve the trustworthiness of AI systems. However, NIST (2023) itself notes that AI technologies are fast-evolving and, as such, risk management frameworks for them need constant development and expansion.
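For orientation, the NIST AI RMF 1.0 organises its guidance around four core functions: Govern, Map, Measure and Manage. The sketch below shows one possible way an organisation might track its own activities against those functions; the example activities and the data structure are assumptions of this illustration, not prescriptions from the framework.

```python
# Sketch: tracking AI risk-management activities against the four core
# functions of NIST AI RMF 1.0 (Govern, Map, Measure, Manage). The example
# activities are hypothetical; only the function names come from the framework.
from enum import Enum

class RmfFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

activities = {
    RmfFunction.GOVERN:  ["Define accountability for AI incidents", "Approve an AI use policy"],
    RmfFunction.MAP:     ["Inventory AI systems and their contexts of use"],
    RmfFunction.MEASURE: ["Track bias and robustness metrics per release"],
    RmfFunction.MANAGE:  ["Prioritise and treat risks; document residual risk"],
}

for fn, items in activities.items():
    print(fn.value)
    for item in items:
        print("  -", item)
```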

2. The AI risk management blueprint by Bogdanov and colleagues (15) comprehensively highlights the major risks AI systems face and how each can be addressed at different stages of AI development. By focusing on cybersecurity risks, regulatory risks and some specific AI risks (such as algorithmic, social and ethical risks), the blueprint offers an applicable risk management process.

These existing AI risk management frameworks lack reflection on context-specific dynamics and on human factors in risk exposure and management, and they are currently unable to keep pace with the rapid proliferation of AI technologies' development and applications (16,17). In the UK and US, where the frameworks attempt to be context-specific, they lead to inconsistencies across sectors (17). Hence, further developments are needed in the field of AI risk management.

Hey there, I'm Temi, and I write really well. I love working on creative and value-oriented projects in academic and professional contexts. I am available to provide you with top-notch research and writing services. You can contact me via eomosebi@gmail.com.

1. Steimers A, Schneider M. Sources of risk of AI systems. IJERPH. 2022 Mar 18;19(6):3641.

2. Al-Tkhayneh KM, Al-Tarawneh HA, Abulibdeh E, Alomery MK. Social and legal risks of artificial intelligence: an analytical study. Acad J Interdiscip Stud. 2023 May 5;12(3):308.

3. Carvalho JP. The political-economic risks of AI [Internet]. 2025 [cited 2025 Jun 5]. Available from: https://www.ssrn.com/abstract=5137622

4. Sepehr KN, Nilofar N. The environmental impacts of AI and digital technologies. aitechbesosci. 2023;1(4):11–8.

5. James P, Magee L. Domains of sustainability. In: Global Encyclopedia of Public Administration, Public Policy, and Governance [Internet]. Springer, Cham; 2016 [cited 2025 Jun 5]. p. 1–17. Available from: https://link.springer.com/rwe/10.1007/978-3-319-31816-5_2760-1

6. Livingston M. Preventing racial bias in federal AI. Journal of Science Policy & Governance. 2020;16(2):1–7.

7. Orwat C. Risks of discrimination through the use of algorithms. Germany: Federal Anti-Discrimination Agency; 2020.

8. OECD. Measuring the environmental impacts of artificial intelligence compute and applications: the AI footprint [Internet]. 2022. Report No.: DSTI/CDEP/AIGO(2022)3/FINAL. Available from: https://www.oecd.org/content/dam/oecd/en/publications/reports/2022/11/measuring-the-environmental-impacts-of-artificial-intelligence-compute-and-applications_3dddded5/7babf571-en.pdf

9. Mahmoud M. The risks and vulnerabilities of artificial intelligence usage in information security. In: 2023 International Conference on Computational Science and Computational Intelligence (CSCI) [Internet]. 2023 [cited 2025 Jun 6]. p. 266–9. Available from: https://ieeexplore.ieee.org/document/10590560

10. Giebel GD, Raszke P, Nowak H, Palmowski L, Adamzik M, Heinz P, et al. Problems and barriers related to the use of AI-based clinical decision support systems: interview study. Journal of Medical Internet Research. 2025 Feb 3;27(1):e63377.

11. van Giffen B, Herhausen D, Fahse T. Overcoming the pitfalls and perils of algorithms: A classification of machine learning biases and mitigation methods. Journal of Business Research. 2022 May 1;144:93–106.

12. Muley A, Muzumdar P, Kurian G, Basyal GP. Risk of AI in healthcare: a comprehensive literature review and study framework. Asian Journal of Medicine and Health. 2023;21(10):276–91.

13. Douglas DM, Lacey J, Howard D. Ethical risk for AI. AI Ethics. 2025 Jun 1;5(3):2189–203.

14. National Institute of Standards and Technology (U.S.). Artificial Intelligence Risk Management Framework (AI RMF 1.0) [Internet]. Gaithersburg, MD; 2023 Jan [cited 2025 Jun 8]. Report No.: NIST AI 100–1. Available from: http://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf

15. Bogdanov D, Etti P, Kamm L, Stomakhin F. Artificial intelligence system risk management methodology based on generalized blueprints. In: 16th International Conference on Cyber Conflict. Tallinn: NATO CCDCOE Publications; 2024. p. 123–40.

16. Polemi N, Praça I, Kioskli K, Bécue A. Challenges and efforts in managing AI trustworthiness risks: a state of knowledge. Front Big Data. 2024 May 9;7:1381163.

17. Al-Maamari A. Between innovation and oversight: a cross-regional study of AI risk management frameworks in the EU, U.S., UK, and China [Internet]. arXiv; 2025 [cited 2025 Jun 8]. Available from: http://arxiv.org/abs/2503.05773


