How to sound the alarm
In theory, external whistleblower protections could play a valuable role in surfacing AI risks. They can shield employees who are fired for exposing harmful corporate practices, and they can help compensate for inadequate internal reporting mechanisms. Almost every state recognizes a public policy exception to at-will employment: in other words, employees who are fired in retaliation for reporting unsafe or illegal company practices can seek relief against their employers. In practice, however, this exception offers employees few guarantees. Judges tend to side with employers in whistleblower cases. And because the community has yet to reach a consensus on what qualifies as unsafe AI development and deployment, AI labs are especially likely to survive such lawsuits.
These and other shortcomings explain why the aforementioned 13 AI workers, including former OpenAI employee William Saunders, called for a novel "right to warn." Companies would have to give employees an anonymous process for raising risk-related concerns to the lab's board, to a regulatory authority, and to an independent third body made up of subject-matter experts. The details of this process have yet to be worked out, but it would presumably be a formal, somewhat bureaucratic mechanism. The board, the regulator, and the third party would all need to make a record of the disclosure. Each would likely launch some sort of investigation, and follow-up meetings and hearings also seem like a necessary part of the process. Yet if Saunders is to be taken at his word, that is not what AI workers actually want; what they want is something simpler.
When Saunders outlined his ideal process for sharing safety concerns on a tech podcast, his focus was not on formal channels for reporting established risks. Instead, he indicated a preference for an intermediate, informal step. He wants an opportunity to get neutral, expert feedback on whether a safety concern is substantial enough to warrant a "high stakes" process such as a right-to-warn mechanism. Current government regulators, as Saunders says, cannot play that role.
For one, they lack the expertise to help an AI worker think through safety concerns. What's more, as Saunders said on the podcast, few workers will pick up the phone if they know a government official is on the other end. Instead, he envisions being able to call an expert to discuss his concerns. In the ideal scenario, he would learn that the risk in question does not seem serious or imminent, and that peace of mind would free him to return to his work.
Lowering the stakes
What Saunders describes on this podcast is not a right to warn, since that presumes the employee is already convinced that unsafe or illegal activity is taking place. What he is really calling for is a gut check: an opportunity to verify whether a suspicion of unsafe or illegal behavior is warranted. The stakes would be much lower, so the regulatory response could be lighter. The third party responsible for weighing these gut checks could be a far more informal one. For example, AI PhD students, retired AI industry employees, and other individuals with AI expertise could volunteer for an AI safety hotline. They could be tasked with quickly and expertly discussing safety matters with employees through confidential, anonymous phone conversations. Hotline volunteers would be familiar with leading safety practices and would have detailed knowledge of the options available to the employee, such as right-to-warn mechanisms.
As Saunders noted, few employees will want to go from 0 to 100 with their safety concerns, jumping straight from coworkers to the board or even a government agency. They are more likely to raise their issues if an intermediate, informal step is available.
Learning from examples elsewhere
The details of precisely how an AI safety hotline would work deserve more debate among AI community members, regulators, and civil society. For the hotline to realize its full potential, for example, it may need some means of escalating the most urgent, verified reports to the appropriate authorities. How to ensure the confidentiality of hotline conversations is another matter that requires thorough investigation. A further key question is how to recruit and retain volunteers. Given the broad concern among leading experts about AI risk, some may be willing to participate simply out of a desire to lend a hand. If too few people step forward, other incentives may be required. An essential first step, however, is acknowledging this missing piece in the puzzle of AI safety regulation. The next step is looking for models on which to build the first AI safety hotline.
One place to start is with ombudsmen. Other industries have recognized the value of designating these neutral, independent individuals as a resource for assessing the seriousness of employee concerns. Ombudsmen exist in academia, nonprofit organizations, and the private sector. The distinguishing characteristic of these individuals and their staff is neutrality: they have no incentive to favor one side or the other, which makes them more likely to be trusted by all. A look at the use of ombudsmen in the federal government suggests that when they are present, issues tend to be raised and resolved more quickly than when they are not.