Encoding Settings
General
Open Encoding Settings for the project.
Encoding Activation
Activate Encoding (required) Enables automatic encoding after media upload.
Default Source File Language Defines the language of the original audio track.
Subtitles and Language Processing
Generate Subtitles Automatically Automatically creates subtitles from the source audio.
Automatically Translated Subtitles Select the target languages for automatic subtitle translation.
Automatic Dubbing Optional. Can be enabled if dubbing is required.
Output Formats

Under Encoding Settings → Output Formats, define which renditions are generated during encoding.
The platform provides a preconfigured default setup that covers the requirements of most standard use cases. This default configuration is suitable for typical editorial, corporate, and informational video content and does not require adjustment in most scenarios.
A commonly used setup includes the following resolutions:
1080p
720p
480p
360p
240p
Higher resolutions such as 1440p or 2160p can be enabled if required, but should only be activated when necessary, as they increase encoding time, storage usage, and delivery bandwidth.
H.264 and other output formats can be combined. Mixed-codec renditions are created automatically and merged into a combined streaming format for HLS and DASH delivery.
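As a rough illustration of what such a combined format looks like in practice, an HLS master playlist can list H.264 and HEVC variants side by side, and compatible players select a variant they can decode. The excerpt below is generic HLS syntax with example bandwidths and codec strings, not the platform's exact output:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=5500000,RESOLUTION=1920x1080,CODECS="avc1.640028,mp4a.40.2"
h264_1080p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3500000,RESOLUTION=1920x1080,CODECS="hvc1.1.6.L120.B0,mp4a.40.2"
hevc_1080p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1280x720,CODECS="avc1.64001f,mp4a.40.2"
h264_720p.m3u8
```

Devices without HEVC support simply fall back to the H.264 renditions listed in the same manifest.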
Custom Encoding Profiles
For specialized content types, such as sports, live recordings, or high-motion footage, the default settings can be adapted:
Target video bitrate can be adjusted per rendition
Frame rates such as 50 fps or 60 fps can be configured (we recommend using the default passthrough setting)
Audio bitrate, sample rate, and channel layout can be modified
Video profile and codec settings can be fine-tuned
These options allow customers to optimize output quality for specific content characteristics while maintaining full control over resource usage and playback compatibility.
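As a purely hypothetical sketch of how these adjustments fit together: the platform exposes these options through the settings UI, and the field names below are illustrative only, not its actual schema. A customized 1080p rendition for high-motion sports footage might combine higher video bitrate, passthrough frame rate, and adjusted audio settings like this:

```python
# Hypothetical sketch only: the platform configures these options in its
# settings UI; the field names below are illustrative, not a real schema.
custom_profile = {
    "rendition": "1080p",
    "video": {
        "codec": "h264",              # codec and profile can be fine-tuned
        "profile": "high",
        "bitrate_kbps": 8000,         # raised target bitrate for high-motion footage
        "frame_rate": "passthrough",  # keep the source frame rate (recommended default)
    },
    "audio": {
        "codec": "aac",
        "bitrate_kbps": 192,
        "sample_rate_hz": 48000,
        "channels": 2,                # stereo channel layout
    },
}
```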
Advanced Options
Under Encoding Settings → Advanced Settings:
CMAF Packaging (Fragmented MP4) Recommended for modern HLS and DASH delivery; this will become the default in the near future.
Cover Creation Control Automatically generates multiple thumbnails at defined intervals.
Normalize Audio Optional loudness normalization.
Include Audio-only Version Adds an audio-only track to the streaming manifest.
Watermark Configuration
Under Encoding Settings → Watermark:
Upload a watermark image (PNG with transparency recommended).
Enable Use Watermarks.
Select position and transparency level.
Watermarks are applied during encoding.
Glossary

The AI Glossary feature allows you to improve the accuracy of automatically generated subtitles and AI-based audio processing by providing a custom pronunciation and terminology reference.
An AI glossary can be uploaded in CSV or JSON format and defines specific terms, names, abbreviations, or technical expressions, along with their intended pronunciation. This is especially useful for:
Proper names (e.g. people, brands, locations)
Technical terminology
Foreign words or abbreviations
Domain-specific vocabulary
Uploading an AI Glossary
To upload a glossary:
Open Encoding Settings → AI Glossary
Select a glossary file in CSV or JSON format
Specify the language the glossary applies to
Upload the file
Once uploaded, the glossary is automatically applied during subtitle generation and AI-driven language processing for this project.
Supported File Formats
CSV format Each entry consists of a term and one or more pronunciation examples.
JSON format Allows structured definitions with multiple pronunciation variants per term.
Example structures are displayed directly in the interface to ensure correct formatting.
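As a rough illustration only (the column and field names below are assumptions; follow the example structures shown in the interface for the exact format), a CSV glossary might pair each term with a pronunciation:

```
term,pronunciation
GmbH,geh-em-beh-hah
API,ay-pee-eye
```

A JSON glossary can hold several pronunciation variants per term:

```
[
  { "term": "GmbH", "pronunciations": ["geh-em-beh-hah"] },
  { "term": "API", "pronunciations": ["ay-pee-eye", "A P I"] }
]
```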
Scope and Behavior
The glossary applies project-wide to all newly processed media
Existing subtitles are not retroactively updated
Multiple glossary entries can be maintained and updated over time
If no glossary is uploaded, standard AI language processing is used
Using an AI glossary helps ensure consistent terminology and significantly improves subtitle quality for specialized or branded content.