The Association for Computing Machinery ("ACM") released eight Principles for the Development, Deployment, and Use of Generative AI Technologies. These guidelines include some well-discussed principles, such as transparency and auditability, as well as some less-discussed topics, such as the environmental impact of generative AI as compared to traditional AI models. The eight principles are:
- Limits on deployment and use: Noting that providers should conduct a risk assessment and consider a hierarchy of risk levels ranging from unacceptable risk to minimal risk.
- Ownership: Calling for IP laws to be updated to protect human creation without placing too many restrictions on access to permissible data.
- Personal data control: Suggesting that AI systems should allow a person to opt out from their personal data being used to train an AI model.
- Correctability: Calling for a mechanism to track and correct errors in AI models.
- Transparency: Noting that disclosing the use of generative AI is important, particularly where AI is being used to simulate humans.
- Auditability: Calling for a recording of AI system models, algorithms, data, and outputs to allow for the system to be audited.
- Environmental impacts: Noting that the estimated carbon emissions of generative AI will exceed those of traditional AI models, and that the environmental impact should be reduced where possible.
- Heightened security and privacy: Encouraging risk mitigation to address security and privacy threats, particularly AI-generated code that could contain malware and training data that could be "poisoned."
These eight principles join five previously articulated concepts for generative AI: (1) legitimacy, (2) minimizing harm, (3) interpretability, (4) maintainability, and (5) accountability.
The ACM cautions that "great care must be taken in researching, designing, developing, deploying, and using" generative AI, and that "existing mechanisms and modes for avoiding such harm likely will not suffice."