Ethical AI is built on Transparency,
Accountability and Trust
By Myrah Abrar
Artificial Intelligence (AI) has been a game-changer. Machine-learning systems help audit financial data and detect financial fraud. However, if the underlying data is biased, ethical risks arise, which is why it is important to consider the principles of ethical AI.
What Are Ethics?
Before we dive deeper into the topic, it is important to first understand what ethics means. Ethics is based on standards of fairness, accountability, responsibility, integrity, and honesty. These standards, together with transparency, are at the root of ethical AI and provide a foundation for any AI system.
A Deloitte survey found that ethical risks related to AI are among executives' top concerns. Since AI systems carry various ethical risks, including in the way data is collected and processed, it can be difficult to trust the conclusions they reach.
AI and Ethics
KPMG has identified five principles of AI and ethics, which together form the backbone of ethical AI.
- Workplace Transformation
As the tasks and roles that define work change significantly, and automated decision-making and powerful analytics expand, job displacement will become a reality. Hence, there will be a need to retrain employees.
- Establishment of Oversight and Governance
New regulations governing the use of ethical AI will establish guidelines for protecting the public's wellbeing.
- Alignment of Ethical AI and Cybersecurity
Cybersecurity risks, including adversarial attacks, are more common than one might think when autonomous algorithms are used. Such attacks can disrupt an algorithm simply by tampering with its data, which makes strong cybersecurity essential.
- Mitigation of Bias
To eliminate unfair bias, organizations need to understand how autonomous algorithms work as those algorithms continue to evolve.
- Increase in Transparency
Overall, management policies concerning the use of ethical AI should be based on universal standards of trust and fairness.
AI can improve human decision-making, but it has limits. Bias in algorithms creates ethical risks that call into question the overall reliability of the results. Explainability makes it possible to account for bias and to reproduce consistent results.
Some other ethical risks that need to be considered include workforce transition, poor accountability, erosion of privacy, and lack of transparency. These risks impact the integrity of the AI system. To build trust, organizations need to clearly explain how data is collected and used.
Ethics and Accountability
On April 10, 2019, the United States House of Representatives introduced the Algorithmic Accountability Act of 2019. The bill highlights the importance of risk assessments of automated decision systems, covering the security of personal information, consumer privacy, and the risks posed by the systems themselves.
Governance and accountability come down to who develops the AI ethics standards, who governs the data and the AI system, and who maintains internal controls. Someone has to be held accountable if unethical practices are detected.
Internal auditors have a major role to play in assessing such risks, determining compliance with the applicable regulations, and reporting their findings directly to the audit committee.
AI Data Audit
An audit involves examining data to determine whether it is reliable and accurate, and whether the system used to generate it is operating as intended. Biased data will produce biased results.
For instance, a company that offers web development services might historically have served more white customers than minority customers. If the AI system is trained on that biased data, it will unintentionally reproduce those patterns.
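One simple way to audit training data for the kind of imbalance described above is to measure each group's share of the records before any model is trained. The sketch below is a minimal, hypothetical illustration; the customer records and the `group` field are invented for the example.

```python
# A minimal sketch of a data-bias audit. It reports each demographic
# group's share of the records so heavy imbalance can be spotted
# before the data is used to train a model. All data is hypothetical.
from collections import Counter

def group_shares(records, field):
    """Return each group's share of the records, as a fraction of the total."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

customers = [
    {"id": 1, "group": "white"},
    {"id": 2, "group": "white"},
    {"id": 3, "group": "white"},
    {"id": 4, "group": "minority"},
]

print(group_shares(customers, "group"))  # {'white': 0.75, 'minority': 0.25}
```

A lopsided share like this would prompt a closer look at how the data was collected before letting a model learn from it.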
AI audits make it possible to verify whether data has been properly recorded. The accounting standards also need to be entered accurately for the system to work.
Detection of Fraud
One of the main advantages of AI in auditing is its ability to detect fraud by catching anomalies. For instance, a reimbursable meal expense should be tied to a receipt from a restaurant; if it is not, the AI-driven machine-learning system flags the discrepancy as potential fraud.
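The expense-and-receipt check above can be sketched as a simple rule-based anomaly flag. This is a hypothetical illustration, not a production fraud system; the record shapes and field names (`receipt_id`, `id`) are assumptions made for the example.

```python
# A minimal sketch of rule-based anomaly flagging: every reimbursable
# expense should be tied to a receipt on file, and any expense without
# a matching receipt is surfaced for human review.

def flag_unmatched_expenses(expenses, receipts):
    """Return the expenses whose receipt_id has no matching receipt."""
    receipt_ids = {r["id"] for r in receipts}
    return [e for e in expenses if e["receipt_id"] not in receipt_ids]

expenses = [
    {"employee": "A", "amount": 42.50, "receipt_id": "R-100"},
    {"employee": "B", "amount": 88.00, "receipt_id": "R-999"},  # no receipt on file
]
receipts = [{"id": "R-100", "vendor": "Restaurant"}]

print(flag_unmatched_expenses(expenses, receipts))
```

A real machine-learning system would learn such patterns from historical data rather than hard-coding them, but the flag-and-review loop is the same.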
An ACFE report published in 2018 revealed that companies lose around 5% of revenue every year to occupational fraud, with an average loss of $130,000 per case. The report shows that the risk of occupational fraud is greater than anticipated.
AI systems can quickly and thoroughly analyze large amounts of data to determine whether assets have been misappropriated. AI therefore provides predictive value by identifying high-risk areas, and an accounting fraud prediction model can be built for even more accurate detection.
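The idea of surfacing high-risk areas can be illustrated with a very simple statistical rule: transactions far above the typical amount get flagged for review. This is a sketch under strong simplifying assumptions; a real fraud prediction model would use many features, not a single z-score on amounts, and the transaction figures here are invented.

```python
# A minimal sketch of statistically identifying high-risk transactions:
# amounts more than `threshold` standard deviations above the mean are
# flagged. Illustrative only; the data is hypothetical.
import statistics

def high_risk(amounts, threshold=2.0):
    """Return amounts whose z-score exceeds the threshold."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if (a - mean) / stdev > threshold]

amounts = [120, 95, 110, 105, 130, 98, 5000]  # one suspicious outlier
print(high_risk(amounts))  # [5000]
```

In practice the flagged items would feed into an auditor's review queue rather than being treated as confirmed fraud.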
Corporate governance is critical for establishing and enforcing procedures in AI systems. This is where the chief ethics and compliance officer plays an essential role: identifying ethical risks, managing those risks, and ensuring compliance with standards.
Organizations need to implement governance processes and structures for managing and monitoring their AI activities. The focus should be on promoting transparency, accountability, and trust. In addition, compliance with regulations needs to be at the forefront.
A research study published by Genesys found that over half of the people surveyed said their companies have no policy concerning the ethical use of AI. The participants came from six countries: the U.S., the U.K., New Zealand, Australia, Germany, and Japan.
Transparency and Accountability in AI – The Must Haves
All organizations need to address the ethical use of AI to establish trust in their systems and to meet stakeholder requirements for accurate and reliable information. An understanding of machine learning will prove useful in achieving that.
Professional judgment is still needed to navigate AI and to determine the value of the information the system produces. A fitting acronym here is GIGO: Garbage In, Garbage Out.
If data is not provided and processed reliably, AI will only produce incoherent, incomplete, and inaccurate results. Therefore, the emphasis should always be on transparency, accountability, and trust.
Myrah Abrar is a computer science graduate with a passion for web development and digital marketing. She writes blog articles for ApCelero.