Global multimedia software maker Adobe says it will take an ethical approach to generative AI, following concerns raised by artists and industry observers.
A blog post by the company’s chief trust officer, Dana Rao, highlighted the importance of ethics in developing new AI tools. Generative AI has moved into mainstream awareness in recent months with the launch of systems such as ChatGPT and DALL-E, which can respond to written prompts with convincingly human-like text and images.
Artists have complained that training generative AIs on their work is tantamount to ‘ripping off’ their styles, and that such systems can produce discriminatory or explicit content from harmless prompts. Others have called into question the ease with which humans can pass off AI-generated prose as their own work.
Seemingly in response, Adobe’s three-pillared AI ethics statement promises responsible, accountable and transparent practice. It seeks to “minimise harmful outputs” by training Firefly, its new generative AI platform, on safe and inclusive data. Adobe, along with other content makers and platforms, has already begun working to improve content transparency through the Content Authenticity Initiative. Efforts to introduce provenance technology would also allow visual artists to bar AI programs from using their content.
“Creators want control over whether their work is used to train generative AI or not,” says Rao.
“For some, they want their content out of AI. For others, they are happy to see it used in the training data to help this new technology grow, especially if they can retain attribution for their work.
“With industry adoption [provenance tech] will help prevent web crawlers from using works with “Do Not Train” credentials as part of a dataset.”