Mon. Dec 30th, 2024

US president Joe Biden announced new guidelines for the safe development of AI (Image: AFP via Getty Images)

An executive order on artificial intelligence issued by US president Joe Biden aims to show leadership in regulating AI safety and security – but most of the follow-through will require action from US lawmakers and the voluntary goodwill of tech companies.

Biden’s executive order directs a wide array of US government agencies to develop guidelines for testing and using AI systems, including having the National Institute of Standards and Technology set benchmarks for “red-team testing” that can probe for potential AI vulnerabilities prior to public release.

“The language in this executive order and in the White House’s discussion of it suggests an interest in being seen as the most aggressive and proactive in addressing AI regulation,” says Sarah Kreps at Cornell University in New York.

It is probably “no coincidence” that Biden’s executive order came out just before the UK government convened its own AI summit, says Kreps. But she cautions that the executive order alone will not have much impact unless the US Congress can produce bipartisan legislation and resources to back it up – something she sees as unlikely during the US 2024 election year.

This follows the trend of non-binding actions by the Biden administration on AI. For example, last year the administration issued a blueprint for an AI Bill of Rights, and earlier this year it solicited voluntary pledges from major companies developing AI, says Emmie Hine at the University of Bologna, Italy.

One potentially impactful part of Biden’s executive order covers foundation models – large AI models trained on huge datasets – if they pose “a serious risk to national security, national economic security, or national public health and safety”. The order invokes the Defense Production Act to require that companies developing such AIs notify the federal government about the training process and share the results of all red-team safety testing.

Such AIs could include OpenAI’s GPT-3.5 and GPT-4 models, which are behind ChatGPT, Google’s PaLM 2 model, which supports the company’s Bard AI chatbot, and Stability AI’s Stable Diffusion model, which generates images. “It would force companies that have been very closed-off about how their models work to crack open their black boxes,” says Hine.

But Hine says “the devil is in the details” when it comes to how the US government defines which foundation models pose a “serious risk”. Similarly, Kreps questions the “qualifiers and ambiguities” of the executive order’s wording; the document is unclear about how “foundation model” is defined and who determines what poses a threat.

The US also still lacks the sort of strong data protection laws seen in the European Union and China, and similar laws could underpin AI regulations, says Hine. She points out that China has focused on implementing “targeted, vertical laws addressing specific aspects of AI”, such as generative AI or facial recognition, whereas the European Union has been working to build political consensus among its members on a broad “horizontal approach” covering all aspects of AI.

“[The US] has the [AI] development chops, but it doesn’t have much concrete regulation to stand on,” says Hine. “What it does have is strong statements about ‘AI with democratic values’ and agreements to cooperate with allied countries.”
