Building an AI Policy: Practical Considerations

How is Knology handling the widespread use of artificial intelligence?

by Corinne Brenner
Apr 29, 2026

In the three and a half years since OpenAI's ChatGPT was publicly released, the world has been reckoning with both the practical applications and the theoretical implications of large language models (LLMs) as a type of artificial intelligence. This watershed moment in the development of technology comes with a set of very enticing promises: tools (whether standalone products like Claude or Gemini, or new features embedded in software you already use) that will save you time, require less effort, and produce higher-quality work than you could on your own.

Recently at Knology, we have been examining the opportunities LLMs offer and the risks they pose to our work. To do this, we first created an official AI committee, tasked with understanding how this technology could improve our processes and work output, as well as the new risks and challenges it introduces. The committee meets regularly to evaluate and potentially approve AI-enabled tools, discuss emerging use cases, and share industry developments.

While AI-enabled tools have the potential to support our research processes, they also introduce significant risks to data and operational security, intellectual property, and work quality. The broader context of how LLMs are developed, operated, and used, both at the small scale of our workflows and more widely in society, raises ethical questions. As a research-to-practice organization, we are actively examining whether and how to integrate these tools without compromising the integrity of our work. While there is an active public dialogue on subjects like new workflows and ways to craft effective prompts (along with principled discussions of why to avoid AI tools altogether), we wanted to take a structured, adaptable approach to building an AI policy suited to a rapidly shifting landscape.

Behind the Scenes: Large Language Models

Understanding how these tools work has been foundational to our approach to developing policies and procedures for managing AI. Tools like ChatGPT and Claude are built on LLMs. While these can produce stunningly human-sounding results, it is important to remember that they do not operate as a reference for verified information. They have no underlying concept of the world: an LLM uses statistical patterns learned from enormous amounts of text to predict what words come next. Our experience of using them can feel like talking to a friendly, confident, knowledgeable peer, but they simply are not that.
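To make this concrete, here is a deliberately tiny sketch in Python of what "using statistics to predict text" means. The probability table below is invented for illustration; real LLMs learn billions of parameters from vast corpora rather than using a hand-built lookup, but the core move is the same: pick a plausible next word based on patterns, not facts.

```python
# Toy illustration of the core mechanic behind an LLM: given the text so
# far, pick a statistically likely next word. All probabilities below are
# invented for illustration.

import random

# Hypothetical next-word probabilities "learned" from a tiny corpus.
next_word_probs = {
    "the": {"cat": 0.4, "dog": 0.35, "report": 0.25},
    "cat": {"sat": 0.6, "slept": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "sat": {"down": 1.0},
}

def generate(start: str, max_words: int = 5) -> str:
    """Extend `start` one word at a time using the probability table."""
    words = [start]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:
            break  # no statistics for this word; stop generating
        # Sample the next word in proportion to its probability. Patterns
        # are all the model "knows" -- nothing here checks for truth.
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down"
```

Nothing in that loop verifies whether the output is accurate, which is why every LLM output needs human review.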

Developing Risk Assessment and Mitigation Strategies

Earlier this year, we circulated a brief survey among staff members to understand the landscape of LLM use we have encountered as a group. The survey asked what tools people had used at Knology or outside of work; positive and negative experiences they'd had; and the risks, concerns, and potential benefits these tools pose for their work. Although the survey was simple, it provided a grounded starting point for understanding people's levels of familiarity, their needs and concerns, and the situations they've encountered with AI-enabled tools. Based on the survey results, we specified a set of use cases (that is, types of interactions a user has with technology to achieve distinct goals) for AI use at Knology. These include brainstorming, drafting, and editing written work; transcribing interviews or focus groups conducted for research; and summarizing and integrating academic papers or materials related to projects.

After defining these use cases, we assessed the risks (and risk mitigation strategies) for each by turning to publicly available resources like NTEN's AI for Nonprofits, DataKind's GenAI Use Case Selection and Risk Assessment for Nonprofits, and the NIST AI 100-1 Artificial Intelligence Risk Management Framework. This is an ongoing process: at present, we have a living document that is open to revision based on new experiences and further developments in the technology and tools we use.

Based on our mapping of risks and use cases, several categories of risk have emerged, including:

  • Data security
  • Loss of transparency from using closed-source tools (and from the inherently complex nature of LLMs)
  • Ethics related to human subjects research as well as broad ethical principles
  • Risks to work quality
  • Misuse by bad actors

We are now in the process of developing mitigation strategies for use cases in each category, and are considering these strategies in policy, procedures, and ongoing training.

A Specific Use Case: AI Notetakers

As an example of risks for one use case, consider notetaking tools like Otter.ai, Granola, and Fireflies. Tools like these are often configured to automatically join virtual meetings and produce summaries, transcripts, and lists of action items. They are advertised as tireless assistants: more available, alert, and accurate than any person could ever be.

However, no tool works perfectly. Sometimes they record the wrong word. Sometimes they misunderstand the gist of a conversation, or erroneously record a decision that was later changed. Beyond that, there are questions about these tools that can't be easily answered, such as:

  • How well do they handle translation?
  • Where is the data being processed?
  • How long is data stored, and does this comply with our IRB data storage and retention policy?
  • Is the data being used to train the company's underlying LLM?

This last question is particularly thorny. If the answer is "yes," then one must ask: is it possible for someone else to retrieve that transcript, intentionally or unintentionally, as researchers have demonstrated with written works? In a legal sense, such training has even been ruled to fall within US fair use guidelines as a transformation of the work, with implications for intellectual property shared during a meeting.

In terms of the categories mentioned above, data security and work quality are clearly at risk. If we don't know how the transcript of a meeting is being stored or used, a tool like this could unintentionally expose personal information or intellectual property. An AI-generated transcript may also include errors, which could damage the quality of our work and research.

To mitigate the risks of notetakers in general, we can:

  • Ask questions and review the terms of service for AI-enabled notetaking tools to determine how data is stored and used
  • Ask participants in virtual meetings not to use notetakers, and remove them from the meeting if needed
  • Check AI-generated transcripts for errors, and compare them against recordings if available (see the sketch after this list)
  • Share transcripts and incorporate corrections from meeting participants
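For the transcript check mentioned above, even a lightweight script can help triage which transcripts deserve a closer listen. Below is a minimal sketch, assuming a human-corrected excerpt is available as a reference; the example sentences and the 5% word error rate threshold are invented for illustration, not Knology procedure.

```python
# Minimal sketch of spot-checking an AI-generated transcript against a
# human-corrected reference. The sentences and the word error rate (WER)
# threshold below are illustrative assumptions.

from difflib import SequenceMatcher

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Rough WER: mismatched words (substitutions, insertions, deletions)
    divided by the number of words in the reference."""
    ref_words = reference.lower().split()
    hyp_words = hypothesis.lower().split()
    matcher = SequenceMatcher(None, ref_words, hyp_words)
    errors = 0
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":
            # Count the larger side of the mismatched span as errors.
            errors += max(i2 - i1, j2 - j1)
    return errors / max(len(ref_words), 1)

human_checked = "We agreed to postpone the launch until the IRB approves the protocol"
ai_transcript = "We agreed to propose the launch until the IRB approves the protocol"

wer = word_error_rate(human_checked, ai_transcript)
print(f"Word error rate: {wer:.0%}")  # one substitution ("propose") -> ~8%
if wer > 0.05:  # illustrative threshold, not a standard
    print("Transcript needs a closer review against the recording.")
```

Note that this example flags a single substitution ("propose" for "postpone") that flips the meaning of a decision, which is exactly the kind of error a quick skim can miss.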

When used for research, we can additionally:

  • Include language in consent forms that discloses the use of AI notetakers
  • Remind participants about the use of AI notetakers at the beginning of the meeting
  • Allow participants to opt out of AI notetakers for transcription

Let's Put It to Work!

After laying the groundwork by identifying use cases and assessing risk, we adopted an Artificial Intelligence Governance Policy to guide the use of AI for research and operations at Knology. This policy also establishes the monitoring and auditing function of the AI Committee. We are building out specific procedures to learn about AI developments and potentially useful integrations, evaluate and approve AI-enabled tools, handle incidents, and evaluate how well these tools meet research and operational needs. While we haven't issued broad prohibitions against particular AI products, we have created some general rules and principles for staff, including:

1) Only use AI tools that the AI Committee has reviewed and approved: tools that do NOT train models on user data and that have known data retention policies;

2) Do not include research participants' personally identifying information (PII) in prompts (a minimal redaction sketch follows this list);

3) Critically review all outputs, and take responsibility for the final product (do not simply copy and paste output from an LLM).
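As a companion to rule 2, here is a minimal sketch of scrubbing obvious PII from text before it goes into a prompt. The regular expressions and placeholder tokens are illustrative assumptions; pattern matching only catches easy cases (emails, US-style phone numbers, honorific-plus-surname names), so it supplements, rather than replaces, the human review that rule 3 requires.

```python
# Minimal sketch of rule 2: redact obvious PII before prompting. The
# patterns and placeholder tokens below are illustrative assumptions,
# not a complete redaction solution.

import re

PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b(?:Mr|Ms|Mrs|Dr)\.?\s+[A-Z][a-z]+"), "[NAME]"),
]

def redact(text: str) -> str:
    """Replace matched PII patterns with placeholder tokens."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

note = "Dr. Rivera (rivera@example.org, 212-555-0123) asked about consent."
print(redact(note))
# -> "[NAME] ([EMAIL], [PHONE]) asked about consent."
```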

As LLMs continue to evolve, we remain committed to a "human-in-the-loop" philosophy of experimentation with various tools, ensuring that they serve as extensions of our expertise rather than as replacements for our judgment. We will continue to update our frameworks as the landscape shifts, keeping our focus on research that is responsible and impactful.

About This Article

Interested in learning more about our AI work? Take a look at some of the other research we've published on AI and trust: specifically, about building trustworthy AI and whether AI can be considered a trustworthy research partner.

Photo courtesy of Zulfugar Karimov @ Unsplash
