Multimedia platform releases ethics statement on generative AI

By Cosmos

Global multimedia software maker Adobe says it will take an ethical approach to generative AI, in the wake of red flags raised by artists and industry observers.

A blog post by the company’s chief trust officer, Dana Rao, highlights the importance of ethics in developing new AI tools. Generative AI has moved into mainstream awareness in recent months with the launch of systems such as ChatGPT and DALL-E, which turn natural-language prompts into convincingly human-like text and images.

Artists have complained that training generative AIs on their work is tantamount to ‘ripping off’ their styles, and that the systems can produce discriminatory or explicit content from harmless inputs. Others have called into question the ease with which humans can pass off AI-generated prose as their own work.

Seemingly in response, Adobe’s three-pillared AI ethics statement promises responsible, accountable and transparent practice. The company says it seeks to “minimise harmful outputs” by training Firefly, its new generative AI platform, on safe and inclusive data. Adobe and other content makers and platforms have already begun work to improve content transparency through the Content Authenticity Initiative. Efforts to introduce provenance technology would also allow visual artists to bar AI programs from using their content.

“Creators want control over whether their work is used to train generative AI or not,” says Rao.

“For some, they want their content out of AI. For others, they are happy to see it used in the training data to help this new technology grow, especially if they can retain attribution for their work.

“With industry adoption, [provenance tech] will help prevent web crawlers from using works with ‘Do Not Train’ credentials as part of a dataset.”

Adobe rivals such as Canva have introduced terms of use for their AI products, including Text to Image, Magic Edit and Design, along with safeguards to reduce safety breaches and prevent the technology from creating medical, explicit and political content. Google’s AI principles similarly commit to technology that is socially beneficial and safe, and that avoids creating or reinforcing bias. Earlier this month, Microsoft launched its Bing AI platform and laid off its in-house platform ethics team.
