Kaltura Legal
Kaltura’s Artificial Intelligence Principles
Responsible Use Policy
Last updated: February 2, 2026
- Introduction
Kaltura recognizes the transformative potential of Artificial Intelligence (AI), encompassing both Machine Learning (ML) and Generative AI (GenAI). We are committed to integrating AI responsibly and ethically into our products and services. These principles serve as the foundation for our approach to AI development, aimed at ensuring that innovation goes hand in hand with transparency, security, and the best interests of our customers. Capitalized terms not defined in this Responsible Use Policy shall have the meanings given to them in Kaltura’s Master License And Professional Services Agreement and in Kaltura’s AI Addendum.
- Kaltura Principles and Obligations of AI Systems
Kaltura’s use of AI Systems is guided by these core principles:
- Customer Data Protection: We prioritize the privacy and security of our customers’ data. Kaltura does not use customer data to train its AI models, nor do we share customer data with third parties for the purpose of training their AI models. Any use of customer data in delivering AI-powered functionalities strictly adheres to applicable data privacy laws, regulations, and the terms of our customer agreements, and in particular as explained in our privacy policy available at https://corp.kaltura.com/legal/privacy/privacy-policy/.
- Accountability: We design our AI Systems to function properly throughout their lifecycle, and we ensure they are designed, developed, operated, and deployed in accordance with their objectives and applicable regulatory frameworks. This is supported by our ISO/IEC 42001 certification, which formalizes our approach to accountable AI governance across the AI lifecycle. Kaltura implements comprehensive accountability measures including:
- Documentation: Maintaining comprehensive documentation of AI model development and deployment decisions.
- Impact Assessments: Conducting risk and impact assessments for AI Systems.
- Incident Response: Maintaining incident response procedures to address AI-related issues promptly and effectively.
- Third-Party Audits: Where appropriate, engaging independent third-parties to assess our AI practices and compliance with these principles.
- Governance Structure: Maintaining internal governance structures to oversee AI development and deployment decisions.
- Quality Assurance: Validating the quality and accuracy of Outputs, such as text, clips, and links, by verifying them against trusted sources, including captions, OCR, and documents.
- Transparency and Control: Kaltura is committed to clear communication regarding our use of AI. We strive to clearly indicate when AI is being utilized to deliver features within our products. Customers will have the ability to opt-out from specific AI Systems and retain control over their AI preferences. In addition, Kaltura commits to:
- AI-Generated Content Disclosure: When AI generates or significantly modifies content, this will be disclosed and indicated to the end user (where applicable). AI-generated captions or summaries will include a notice that they were automatically generated and may require human review for accuracy.
- Limitations Communication: We will clearly communicate known limitations of AI Systems to customers.
- Regular Reporting: We will provide periodic updates on our AI practices, improvements, and any significant changes to our AI Systems.
- Fairness and Bias Mitigation: Kaltura constantly strives to develop and deploy AI that is fair, unbiased, and inclusive, and that respects the rule of law, human rights, democratic values, and diversity. We actively work to mitigate potential biases throughout our development lifecycle – from data selection and algorithm design to ongoing monitoring of deployed AI models.
- Security and Reliability: Kaltura is committed to a secure and reliable approach to AI, following industry best practices. Our aim is to protect AI Systems and customer data while supporting dependable and trustworthy AI functionality. These efforts include, among others, protecting the integrity of video content and preventing unauthorized access to Outputs.
- Continuous Improvement: Responsible AI is an ongoing journey for Kaltura. We are continually evaluating and refining our Responsible Use Policy and practices, in order to align with industry best practices and evolving regulations.
- Usage of Third-Party GenAI Features: Certain AI Systems use third-party GenAI providers, such as AWS (Bedrock) and Google (Gemini), as well as self-hosted Speech-to-Text and Text-to-Video models. Text-to-Speech and Vision Language Models can be either self-hosted by Kaltura or provided by a third party. Such services shall be identified as such and detailed in each specific work order the customer signs with Kaltura (including by reference to a list that shall be posted on our website). The third-party GenAI services that Kaltura uses do not use your data to train their models. For EU cloud customers only – please note that some AI Features may be provided from different locations within the EU.
- Accessibility and Inclusion: Kaltura is committed to ensuring our AI Systems promote accessibility and inclusion:
- AI-powered features (such as automated captioning and transcription) are designed to enhance accessibility for customers with disabilities.
- We strive to ensure AI Systems work effectively across diverse languages, accents, and communication styles.
- We actively work to prevent AI Systems from creating or perpetuating barriers to access for any user groups.
- Customer Responsibilities Regarding Use of AI Systems
This Responsible Use Policy sets forth the standards and requirements for the proper use of AI Systems. It applies to all Licensees, Users, customers, developers, integrators, deployers, and any other individuals or entities who access or use the AI Systems.
In addition to the obligations set forth in Kaltura’s Acceptable Use Policy, which apply to any customer utilizing the Hosted Services and/or Software described in the applicable Order Form(s), the following requirements also apply to the use of AI Systems:
- Informed Consent: Obtaining necessary consent and permissions before utilizing AI Systems on any data that may require specific authorization.
- Human Oversight: Recognizing that Outputs should be subject to human review and judgment, especially in critical decision-making processes. Customers should ensure:
- Critical Decisions: Outputs used in critical decision-making processes (such as content moderation, accessibility determinations, or educational assessments) must be subject to meaningful human review and verification.
- High-Risk Applications: Customers are responsible for verifying the accuracy of AI-generated content before public distribution or use in high-stakes contexts. For high-risk use cases, customers should implement appropriate oversight mechanisms, including regular audits and quality checks.
- Error Reporting: Customers should establish processes for identifying and reporting AI errors or unexpected behaviors to Kaltura.
- Training: Users of AI Systems should receive appropriate training to understand the capabilities, limitations, and proper use of AI functionalities per their organizations’ guidelines and practices.
- Permissions and Authorizations: Customers must maintain adequate security policies to manage and monitor end users’ permissions and access to the AI Systems.
- Feedback and Collaboration: Providing Kaltura with feedback on AI Systems to help us improve their functionality, fairness, and effectiveness.
- Prohibited Uses
Customers may not use AI Systems for any purpose other than the execution of the services offered by Kaltura, and shall not allow end users to use such features for any of the following purposes:
Illegal or Harmful Content
- Creating, uploading, processing, distributing, or promoting content that is illegal, encourages violence, self-harm, terrorism, violent extremism, or illegal activities.
- Generating, storing, or distributing content that exploits or harms minors, including any form of child exploitation material or Child Sexual Abuse Material (CSAM), whether real or artificial.
- Creating or distributing content that is harmful, violent, hateful (including hate speech), or intended for bullying, intimidation, or humiliation of individuals or groups.
Deceptive or Manipulative Practices
- Using AI Systems to create or distribute manipulated or synthetic video or audio content (deepfakes) intended to deceive or manipulate without clear and prominent disclosure.
- Impersonating individuals (living or deceased) in video or audio content without explicit consent and clear disclosure.
- Using captioning, transcription, translation, or summarization tools to intentionally misrepresent, distort, or fabricate content.
- Generating or distributing AI-manipulated political campaign materials or content intended to influence political processes through deception, without appropriate disclosure.
Privacy Violations and Unauthorized Surveillance
- Using AI Systems to identify, analyze, track, or profile individuals in video or audio content based on biometric data (such as facial recognition, voiceprints, or gait analysis) without proper consent, legal basis, and appropriate safeguards.
- Using AI-powered video analytics or transcription to monitor or surveil individuals without their knowledge and consent.
- Explicitly predicting or categorizing individuals based on protected characteristics, including racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, age, gender, sexual orientation, disability, or health status.
- Using AI to detect, infer, or assess individuals’ emotions in workplace or educational settings without explicit consent.
Automated Decision-Making with Significant Effects
- Using Outputs as the sole basis for decisions with legal or similarly significant effects on individuals (such as employment decisions, educational assessments with high-stakes consequences, or access to services) without meaningful human oversight and the ability for individuals to contest decisions.
Misuse of Accessibility Features
- Using accessibility features (such as captioning or transcription) in ways that undermine their intended purpose or create barriers for individuals with disabilities.
- Deliberately generating inaccurate or misleading captions or transcripts.
Security Violations
- Attempting to bypass, disable, or manipulate AI safety filters, content moderation systems, or security mechanisms.
- Reverse engineering, extracting, or attempting to recreate AI Systems or proprietary systems.
- Misusing, damaging, interfering with, or disrupting Kaltura’s infrastructure, services, or other customers’ use of the platform.
- Creating or distributing spam, phishing attempts, malware, or other malicious content through Kaltura’s platform.
- Evolving Together
Kaltura sees the advancement of AI as a collaborative endeavor. We value the ongoing dialogue with our customers, partners, and the larger community. Your input is essential as we navigate the opportunities and challenges of AI technology together.
- Reporting Violations and Concerns
Kaltura is committed to maintaining the integrity and safety of our AI Systems. If you:
- Encounter Outputs that violate this policy or applicable laws
- Identify potential biases or harmful behaviors in AI Systems
- Discover security vulnerabilities or safety concerns
- Wish to report misuse of AI Systems
- Have questions about appropriate use of AI Systems
Please report your concerns to: legal@kaltura.com
- Principles Updates
This document will be reviewed and updated periodically to reflect the evolving nature of AI technology, best practices and applicable regulatory frameworks. Customers are encouraged to consult the latest version of this document on the Kaltura website.
- Contact Us
If you have any questions about Kaltura’s AI principles, would like to learn more about your AI preferences, or wish to opt out of specific AI features, please don’t hesitate to contact us at: legal@kaltura.com.