Putting the AI in “Guardrails”

Information guardrails are being formulated by governments and enabled by AI, LLMs, and ML

The recent leak of classified documents via a Discord server has planted a seed in my mind: perhaps the convergence of that event with the hedged, cautious language about classified research in last summer’s OSTP memorandum will lead to a tightening of government control over what, how, and when government-funded research is disseminated, rather than the free-for-all envisioned by proponents of OA and open science.

Other forces are spurring thinking about regulating the information sphere more tightly: introducing guardrails around what gets disseminated, how those who disseminate information are held accountable, and what costs and responsibilities distributing entities must assume in order to do so with confidence (i.e., without being slapped with restraining orders, fined into oblivion, or sued out of existence in civil litigation).

The new trends in self-regulation have started to seem normal, but to me there has been a notable change: by filtering information and determining what gets through and what does not, Facebook, Instagram, TikTok, and others are acting more explicitly like media companies, pushing their identities into a risky zone where libel and defamation law may become more applicable than Section 230 protections.
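
To make the idea of a “guardrail” concrete, here is a minimal, purely illustrative sketch of the kind of gate such filtering implies: text is checked against a policy before it is disseminated. The terms, function names, and logic below are invented for illustration; none of this reflects how any named platform actually implements moderation, which relies on trained classifiers, human review, and appeals processes rather than simple lists.

```python
# A toy illustration of an "information guardrail": a gate that screens
# text against a policy check before it is disseminated.
# Everything here (terms, names, logic) is hypothetical.

BLOCKED_TERMS = {"classified document", "defamatory claim"}  # placeholder policy


def violates_policy(text: str) -> bool:
    """Flag text that matches any placeholder policy term."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)


def guarded_publish(text: str) -> str:
    """Release text only if it clears the guardrail; otherwise hold it."""
    if violates_policy(text):
        return "HELD FOR REVIEW"
    return f"PUBLISHED: {text}"


if __name__ == "__main__":
    print(guarded_publish("A summary of openly published research."))
    print(guarded_publish("Excerpts from a classified document found online."))
```

Even this toy version shows where the editorial judgment creeps in: someone has to decide what goes on the policy list, and that decision is exactly what makes these companies look more like media companies.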

Pressure on technology-driven communication platforms and their role in the public sphere is only going to grow. Regulation of the large language models (LLMs) behind AI systems is already being discussed, and those conversations are converging with debates about information privacy and ownership.

There’s an interesting twist here, which I noted recently when flagging a paper asserting that algorithms are a form of bureaucracy. The paper that inspired this post also reflects on the changes underway at platform companies:

. . . platform companies have become knowledge intermediaries, like newspapers or school curriculum boards, while insulating themselves from traditional accountability. . . . the invisibility of hierarchy allows these knowledge intermediaries to justify themselves on laissez-faire principles, not telling the public what to trust, even while they quietly sink deeper into the Augean mire of moderating offensive, false, or untrue content.

So, how do LLMs, platforms, and classified materials come together? What new guardrails are emerging?