
Every day the American military entrusts the world’s most powerful weapons to hundreds of thousands of service members stationed across the globe, the vast majority of whom are under 30 years old. The military mitigates the risks of all this powerful technology, deployed globally in the hands of young and frequently novice users, through a three-pronged approach: it regulates the technology, the users, and their units. The government has the opportunity to do the same with AI.


Depending on the task, service members are required to successfully complete courses, apprenticeships, and oral examinations before being given the authority to drive a ship, fire a weapon, or even in some cases, perform maintenance tasks. Each qualification reflects how technologically complicated the system may be, how lethal it might be, and how much authority the user will be given to make decisions. More than that, knowing that even qualified people get tired, bored, or stressed, the military has a backup system of standard operating procedures (SOPs) and checklists that ensure consistent, safe behavior—something surgeons, for example, have imitated.

Risk reduction in the military goes beyond individuals to also encompass units. “Carrier quals,” for example, are not just for individual pilots. They must also be earned through the joint demonstration of an aircraft carrier and its assigned air wing (the group of pilots). Unit qualifications emphasize teamwork, collective responsibility, and the integrated functioning of multiple roles within a specific context. This ensures that every team member is not only proficient in their own tasks but also fully understands their duties within a larger context.

Finally, to complement qualifications and checklists, the military separates and delineates authorities to different individuals depending on the task and the level of responsibility or seniority of the individual. For example, a surface warfare officer with weapons release authority must still request permission from the ship’s captain to launch certain types of weapons. This check ensures that individuals with the proper authority and awareness have the opportunity to address special categories of risk—like those that may escalate a conflict or reduce the inventory of a particularly important weapon.

These military strategies for addressing risks should inspire conversations about how to regulate AI because we have seen similar approaches work for other, non-military communities. Qualifications, SOPs, and delineated authorities already complement technical and engineering regulations in sectors like healthcare, finance, and policing. While the military has the unique ability to enforce such qualification regimes, these frameworks can also be effectively applied in civilian sectors. Their adoption can be driven by demonstrating the business value of such tools, by government regulation, or by leveraging economic incentives.

The primary advantage of a qualification regime would be to limit access to potentially dangerous AI systems to vetted and trained users. The vetting process helps reduce the risk posed by bad actors, like those who would use these systems to produce text or video that impersonates public figures, or even to stalk or harass private citizens. The training helps reduce the risk that well-intentioned people who don’t fully understand these technologies will use them in ways they weren’t intended, like a lawyer who uses ChatGPT to prepare a legal brief full of fabricated citations.

To further enhance accountability for individual users, certain qualifications, such as those for designing bespoke biological agents, could require users to have a unique identifier, akin to a national provider identifier or a driver’s license number. This would enable professional organizations, courts, and law enforcement to effectively track and address instances of AI misuse, adding a mechanism for accountability that our legal system understands well.

Complementing individual qualifications with organizational qualifications can make for even more robust, multi-layered oversight for especially high-performance systems that serve mission-critical functions. It reinforces that AI safety is not just an individual responsibility but an organizational one as well. This qualification approach would also support the development of delineated responsibilities that would restrict especially consequential decisions to those who aren’t just qualified but are specifically authorized, akin to how the Securities and Exchange Commission (SEC) regulates who can engage in high-frequency trading operations. In other words, it will not be enough for a user to simply know how to use AI; they must also know when it is appropriate to do so and under whose authority.

Qualifications and checklists can have secondary benefits as well. Designing, administering, and monitoring them will create jobs. National and state governments can become the qualifying agencies, while professional associations can lead safety research and the accompanying standards. Even AI companies could benefit economically from supporting qualification training programs for their individual systems.

The idea of implementing a qualification or licensing system for AI use presents a compelling yet complex set of opportunities and challenges. The framework could substantially improve safety and accountability, but it also comes with hurdles and potential drawbacks. The first may be that it creates barriers to accessing these tools and a less diverse field of practitioners. Qualification regimes also carry bureaucratic overhead, and there is a risk that different jurisdictions will create different qualifications that unnecessarily impede innovation and an efficient global AI market. And of course, qualifications may only complicate, not necessarily prevent, the efforts of bad actors intent on harm.

These drawbacks have to be taken in context, however. In the absence of a well-thought-out approach to qualifications, we are forced to rely exclusively on regulating the technology itself through engineering requirements, a process bound to be just as bureaucratic and slow, and never sufficient on its own.

While the benefits of a licensing or qualification system could be significant in terms of enhancing safety and responsibility, the logistical, ethical, and practical challenges warrant careful consideration. That consideration cannot delay action toward qualification regimes, however, as these technologies are spreading quickly.

Governments and professional societies can start now to establish or simply designate trusted agents for priority sectors or applications and give them their first task: gathering and analyzing incidents of AI harm. Databases of AI incidents, like existing repositories of autonomous vehicle crash reports, help oversight organizations better understand risks as they develop training and qualification regimes.

Beyond the first step of documenting harm, regulatory agencies need to start piloting qualification mechanisms and sharing lessons learned for iterative improvement. Multiple pilots could be run in different locales and different markets to learn in parallel and better evaluate the costs and benefits of different regulatory approaches. Alongside this, we need to continue developing educational initiatives to improve AI literacy in the U.S., since these AI systems will become as much a part of everyday life as internet search engines. That effort should start with K-12 schools, community and four-year colleges, and other post-secondary educational programs.

Human and technological safeguards must act in harmony to mitigate the risks of AI—focusing on end-user qualifications shouldn’t deter efforts to develop inherently safer technologies in the first place. But we need to regulate and empower individuals to seize the opportunities and mitigate the risks of AI. Let’s learn from the military qualification process to create practical, effective steps that ensure that those who use AI are qualified to do so.

