
AI Policies

AI Policy Overview

The content below outlines policy areas and references the relevant university policies that apply to the use of AI and generative AI within the university's ecosystem. This ensures that AI enablement across the institution aligns with the university's broader data governance and information security standards.

These policies are integral to guiding the ethical, secure, and responsible use of AI technologies at the university. For more information, visit the university's policy page.

  • Policy Name: Data Governance & Classification Policy
      • Relevant AI Application: AI Data Management
      • Following the existing data governance and classification policy helps ensure that AI models use only data classified and handled according to university standards, safeguarding sensitive information and maintaining compliance with data governance practices.
  • Policy Name: Acceptable Use of Information Technology Policy
      • Relevant AI Application: AI Tools and Resources
      • Following the acceptable use policy helps regulate the ethical and appropriate use of AI technologies, ensuring they align with the university's IT usage standards, including restrictions on generating or using unethical AI content.
  • Policy Name: Information Security Incident Management & Response Policy
      • Relevant AI Application: AI System Security
      • Following the incident management and response policy helps ensure that security incidents involving AI systems are promptly identified, reported, and remediated in accordance with the university's established response procedures.
  • Policy Name: Electronic & Information Technology (EIT) Accessibility Policy
      • Relevant AI Application: Inclusive AI Development
      • Following the EIT accessibility policy helps ensure that AI tools and applications are accessible to users with disabilities, promoting inclusivity in the development and deployment of AI solutions across the university.
  • Policy Name: Vulnerability Management Policy
      • Relevant AI Application: AI Vulnerability Assessment
      • Following the vulnerability management policy addresses potential vulnerabilities in AI systems, ensuring regular assessment and mitigation of risks associated with AI implementations, such as susceptibility to adversarial attacks.