Demystifying AI Security: Building Safe Pathways for Educational Innovation

Presented by Kris Hagel | Conference Session | AI Innovation Summit

Delivered in February 2025 by Kris Hagel, Chief Information Officer of the Peninsula School District, this presentation addresses the critical topic of AI security in education and provides a framework for safe, responsible AI implementation. It demystifies the technical side of AI and offers practical guidance for educators.

Key Takeaways:

  • Understanding Generative AI: The presentation reviews fundamental concepts of generative AI:
    • GPT (Generative Pre-trained Transformer): Explains that GPT creates new content, is pre-trained on vast datasets (with that pre-training largely frozen afterward), and functions as a "guess the next token" machine.
    • Tokens: Defines tokens as the basic units of data processed by AI (words, pixels, letters); a short tokenization sketch follows this list.
    • Neural Network: Briefly mentions neural networks as the underlying structure of AI.
    • Large Language Models (LLMs) as Files: Emphasizes that LLMs are essentially large files, which is what makes local operation possible.
  • Running LLMs Locally: Highlights the possibility of running LLMs entirely on local hardware, keeping data on-device for greater security (a minimal local-inference sketch follows this list).
  • Breaking Down ChatGPT's Functionality: Visually explains how ChatGPT processes user inputs (the same request flow is sketched in code after this list), including:
    • Text Input: Text entered by the user is processed by the LLM.
    • System Prompt: A hidden prompt that guides the AI's behavior (mentioned with a link to Anthropic's documentation).
    • Conversation History: Saved conversations are stored in a database on OpenAI's servers.
    • Knowledge Addition: Uploaded documents are chunked, embedded into a vector database (which the presentation likens to a mini neural network), and stored on OpenAI's servers; see the retrieval sketch after this list.
    • Audio Input: Audio is transcribed into text and processed as plain text.
  • Extending the Concepts to Other Tools: The same principles of data handling are shown to apply to other AI tools like MagicSchool.ai and Colleague.ai, emphasizing the storage of conversation history, file uploads, and audio transcriptions.
  • Peninsula's Four-Point Data Security Model: The core of the presentation introduces a four-tiered model for categorizing AI tools by data security. The presentation doesn't explicitly name the tiers, but based on previous presentations they are (a hypothetical encoding appears after this list):
    • Sandbox AI
    • Lifeguard AI
    • Vault AI
    • Pocket AI
  • Existing District Guidance: The presentation emphasizes the importance of adhering to existing district guidelines on data privacy and security (FERPA, COPPA, CIPA) when using AI tools.
  • PSD's AI Website (psd401.net/ai): The presentation highlights the district's website as a resource for information and guidance on AI.
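
To ground the "guess the next token" framing, here is a minimal tokenization sketch in Python. It assumes the open-source tiktoken library is installed; the sentence and encoding choice are illustrative, and the point is simply that the model sees integer tokens, not words.

```python
# pip install tiktoken
import tiktoken

# The encoding used by GPT-4-class models (chosen here for illustration).
enc = tiktoken.get_encoding("cl100k_base")

text = "Peninsula School District protects student data."
token_ids = enc.encode(text)

# Each integer is one token: the basic unit the model actually processes.
print(token_ids)

# Decode tokens one at a time to see how the text was split into pieces.
for tid in token_ids:
    print(tid, repr(enc.decode([tid])))
```

Generation is this process run repeatedly: the model assigns a probability to every possible next token, one is chosen and appended, and the loop continues.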
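
Because an LLM is essentially a large file, it can be served with no cloud dependency at all. The sketch below assumes Ollama (ollama.com) is installed and a model such as llama3 has already been pulled; Ollama exposes a local HTTP API on port 11434, so the prompt and the response never leave the machine.

```python
import json
import urllib.request

# Assumes `ollama pull llama3` has already downloaded the model file locally.
payload = {
    "model": "llama3",
    "prompt": "Explain FERPA in one sentence.",
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# The call stays on localhost: nothing crosses the network boundary.
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```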
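
The flow the presentation diagrams for ChatGPT (user text, a hidden system prompt, and accumulated conversation history) maps directly onto the message list an API client sends. This sketch uses OpenAI's official Python library; the model name and prompt wording are illustrative assumptions. Note that everything in the messages list, including prior turns, is transmitted to and stored on the provider's servers.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The hidden "system prompt" that steers behavior (wording is illustrative).
system_prompt = "You are a helpful assistant for K-12 educators."

messages = [
    {"role": "system", "content": system_prompt},
    # Conversation history: every prior turn is resent and stored server-side.
    {"role": "user", "content": "What is a token?"},
    {"role": "assistant", "content": "A token is the basic unit of text a model processes."},
    # The new user input.
    {"role": "user", "content": "Give me a classroom analogy."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```

Audio input follows the same path: it is transcribed to text first (OpenAI exposes a transcription endpoint for this) and then enters the message list as plain text.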
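
"Knowledge addition" works by chunking an uploaded document, embedding each chunk as a vector, and retrieving chunks by similarity at question time. The self-contained numpy sketch below substitutes invented 4-dimensional vectors for real embeddings to show just the retrieval step.

```python
import numpy as np

# Toy 4-dimensional "embeddings" standing in for a real embedding model's output.
chunks = {
    "FERPA governs student education records.":   np.array([0.9, 0.1, 0.0, 0.2]),
    "Tokens are the units a model processes.":    np.array([0.1, 0.9, 0.3, 0.0]),
    "COPPA applies to children under 13 online.": np.array([0.8, 0.0, 0.1, 0.3]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: how closely two vectors point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# An invented query embedding for "What law covers student records?"
query = np.array([0.85, 0.05, 0.05, 0.25])

# A "vector database" is, at its core, nearest-neighbor search over stored vectors.
best = max(chunks, key=lambda text: cosine(query, chunks[text]))
print(best)
```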
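
Since the tier names come from previous Peninsula presentations rather than this one, the sketch below is only a hypothetical illustration of how a district might encode the four-point model in software; the tier descriptions and the tool assignments are invented placeholders, not PSD policy.

```python
from enum import Enum

class Tier(Enum):
    """Peninsula's four tiers; descriptions are assumed, not official."""
    SANDBOX = "Sandbox AI"      # open exploration, no real student data
    LIFEGUARD = "Lifeguard AI"  # supervised use with guardrails
    VAULT = "Vault AI"          # vetted tools covered by signed SDPAs
    POCKET = "Pocket AI"        # runs locally, data never leaves the device

# Hypothetical tool-to-tier assignments, purely for illustration.
TOOL_TIERS = {
    "public-chatbot": Tier.SANDBOX,
    "magicschool": Tier.VAULT,
    "local-llama": Tier.POCKET,
}

def may_use_with_pii(tool: str) -> bool:
    """Example rule (assumed): only Vault or Pocket tools may touch student PII."""
    return TOOL_TIERS.get(tool) in (Tier.VAULT, Tier.POCKET)

print(may_use_with_pii("public-chatbot"))  # False
print(may_use_with_pii("local-llama"))     # True
```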

Actionable Insights:

  • Understand the Technical Basics of AI: Familiarize yourself with the core concepts of generative AI, including GPT, tokens, and LLMs.
  • Consider Running LLMs Locally: Explore the possibility of running LLMs locally for enhanced data security.
  • Analyze How AI Tools Handle Data: Understand how different AI tools (ChatGPT, MagicSchool.ai, Colleague.ai) process and store user inputs, including text, documents, and audio.
  • Apply the Four-Point Data Security Model: Categorize AI tools based on their data security levels and choose the appropriate tool for the sensitivity of the data being used.
  • Adhere to Existing District Guidelines: Always comply with district policies on data privacy and security when using AI tools.
  • Sign SDPAs with AI Vendors: Ensure signed student data privacy agreements (SDPAs) are in place with your AI vendors.
  • Think About Where Your PII Is Already Stored: Recognize that much PII already resides on large tech platforms, and consider what that means for risk.
  • Visit PSD's AI Website: Access psd401.net/ai for resources and guidance on AI in education.

Looking Ahead:

This presentation provides a clear and practical framework for understanding and addressing AI security concerns in education. By demystifying the technical aspects of AI, highlighting data handling practices, and introducing a tiered security model, the presentation empowers educators to make informed decisions about AI tool usage. The emphasis on existing district guidelines and the availability of resources on PSD's AI website reinforce the importance of a responsible and secure approach to AI integration. The presentation encourages ongoing dialogue and critical evaluation of AI tools to ensure that they are used safely and effectively to enhance teaching and learning.