A surge in the use of generative AI tools across the industry for a variety of purposes, from drafting human-like text to churning out compelling images, has piqued public interest as well as concern about their capacity to deceive people and spread misinformation, among other risks.
To address growing concerns over the misuse of AI technologies and mounting calls for regulation, President Joe Biden hosted a White House summit in May, which a number of technology executives attended. It was there that the administration reminded the industry that it was accountable for ensuring the safety of its technology.
Following Biden's warning, seven leading artificial intelligence firms announced new voluntary safeguards at a White House event on Friday, designed to minimise abuse of and bias within the emerging technology.
Executives from Alphabet Inc., Microsoft, Amazon, and OpenAI were among those who attended Friday's event at the White House and committed to the transparency and security pledge.
Under the deal, these companies plan to put new artificial intelligence systems through internal and external testing before releasing them, enlisting outside teams to look for security flaws, discriminatory tendencies, and risks to Americans' rights, health information, or safety.
The companies, which also include Anthropic and Inflection AI, additionally pledged to share information with governments, civil society, and researchers in order to improve risk mitigation and to report vulnerabilities as they emerge. The leading AI firms will also embed digital watermarks in the content their systems create, providing a way to distinguish real images and videos from those generated by computers.
According to White House officials, the pledge helps reconcile the promise of artificial intelligence with its hazards, and is the outcome of months of extensive behind-the-scenes lobbying.