Over the past several years, concerns around AI ethics have gone mainstream. The concerns, and the outcomes everyone wants to avoid, are largely agreed upon and well documented. No one wants to push out discriminatory or biased AI. No one wants to be the object of a lawsuit or regulatory investigation for violations of privacy.

But once we've all agreed that biased, black box, privacy-violating AI is bad, where do we go from here? The question most every senior leader asks is: How do we take action to mitigate those ethical risks? Acting quickly to address concerns is admirable, but given the complexities of machine learning, of ethics, and of their points of intersection, there are no quick fixes.

To implement, scale, and maintain effective AI ethical risk mitigation strategies, companies should begin with a deep understanding of the problems they're trying to solve. A challenge, however, is that conversations about AI ethics can feel nebulous. The first step, then, should consist of learning how to talk about the subject in concrete, actionable ways. Here's how you can set the table for AI ethics conversations in a way that makes next steps clear.

We recommend assembling a senior-level working group that is responsible for driving AI ethics in your organization. Its members should have the right skills, experience, and knowledge so that the conversations are well informed about business needs, technical capacities, and operational know-how. At a minimum, we recommend involving four kinds of people: technologists, legal/compliance experts, ethicists, and business leaders who understand the problems you're trying to solve using AI. Their collective goal is to understand the sources of ethical risk generally, for the industry of which they are members, and for their particular company. After all, there are no good solutions without a deep understanding of the problem itself and of the potential obstacles to proposed solutions.

You need the technologists to assess what is technologically feasible, not only at a per-product level but also at an organizational level. That is because, in part, different ethical risk mitigation plans require different tech tools and skills. Knowing where your organization stands from a technological perspective is essential to mapping out how to identify and close the biggest gaps.

Legal and compliance experts are there to help ensure that any new risk mitigation plan is compatible, and not redundant, with existing risk mitigation practices. Legal issues loom particularly large because it is neither clear how existing laws and regulations bear on new technologies, nor what new regulations or laws are coming down the pipeline.

Ethicists are there to help ensure a systematic and thorough investigation of the ethical and reputational risks you should attend to: not only those that arise by virtue of developing and procuring AI, but also those that are particular to your industry and/or your organization. Their role is especially important because compliance with outdated regulations does not ensure the ethical and reputational safety of your organization.

Finally, business leaders should help ensure that all risk is mitigated in a way that is compatible with business necessities and goals. Zero risk is an impossibility so long as anyone does anything.