News & Events

EU, U.S. Making Moves to Address Ethics in AI

The United States and European Union are divided by thousands of miles of the Atlantic Ocean, and the gap between their approaches to regulating AI is just as wide. The landscapes are also dynamic, with the latest change on the U.S. side set to roll out today—about seven weeks after a big move in the EU.

The stakes are high on both sides of the Atlantic, with repercussions in practices as disparate as determining prison sentences and picking who gets hired.

The European Union’s Artificial Intelligence Act (AIA), which was approved by the Council of the EU on Dec. 6 and is set to be considered by the European Parliament as early as March, would regulate AI applications, products and services under a risk-based hierarchy: The higher the risk, the stricter the rule.

If passed, the EU’s AIA would be the world’s first horizontal—across all sectors and applications—regulation of AI.

In contrast, the U.S. has no federal law specifically regulating the use of AI, relying instead on existing laws, blueprints, frameworks, standards and regulations that can be stitched together to guide the ethical use of AI. However, while those frameworks can guide business and government, they are voluntary and offer no protection to consumers who are wronged when AI is used against them.

Adding to the patchwork of federal actions, local and state governments are enacting their own laws: New York City and the state of California address AI bias in employment, while Colorado has a law covering its use in insurance. No proposed or enacted local law addressing the use of AI in jail or prison sentencing has been reported in the news media. However, in 2016, a Wisconsin man, Eric Loomis, unsuccessfully sued the state over a six-year prison sentence that was based, in part, on AI software, according to a report in The New York Times. Loomis contended that his due process rights were violated because he could not inspect or challenge the software’s algorithm.

“I would say we still need the foundation from the federal government,” Haniyeh Mahmoudian, global AI ethicist at DataRobot, told EE Times. “Things around privacy that pretty much every person in the United States is entitled to, that is something that the federal government should take care of.”

The latest national guideline is expected to be released today by the National Institute of Standards and Technology (NIST).

NIST’s voluntary framework is designed to help U.S.-based organizations manage AI risks that may impact individuals, organizations and society in the U.S. The framework does this by incorporating trustworthiness considerations, such as explainability and mitigation of harmful bias, into AI products, services and systems.

“In the short term, what we want to do is to cultivate trust,” said Elham Tabassi, chief of staff in the Information Technology Laboratory at NIST. “And we do that by understanding and managing the risk of AI systems so that it can help to preserve civil liberties and rights and enhance safety [while] at the same time provide and create opportunities for innovation.”

Longer term, “we talk about the framework as equipping AI teams, whether they are primarily people designing, developing or deploying AI, to think about AI from a perspective that takes into consideration risks and impacts,” said Reva Schwartz, a research scientist in NIST’s IT lab.

Prior to the release of NIST’s framework, the White House under President Joe Biden issued its “Blueprint for an AI Bill of Rights” in October. It lays out five principles to guide the ethical use of AI:

  • Systems should be safe and effective.
  • Algorithms and systems should not discriminate.
  • People should be protected from abusive data practices and have control over how their data is used.
  • Automated systems should be transparent.
  • Opting out of an AI system in favor of human intervention should be an option.

Biden’s regulation-lite approach seems to follow the light regulatory touch favored by his immediate predecessor.

By EE Times

Link: https://www.eetimes.com/eu-u-s-making-moves-to-address-ethics-in-ai/
