Overview of Content Pipeline
LightAct uses a flexible and powerful multi-stage content pipeline that has been fine-tuned for performance. In this article, we will go through the basic concepts and terminology and explain how it all comes together.
Content Flow in LightAct
Content ingestion nodes in Layer Layouts
As a media server, perhaps the most standard way for LightAct to ingest content is through Layer nodes such as Get Texture (for image files), Play Video (for video files and image sequences) and Play Notch Block (for Notch blocks).
Content IO Device nodes (in Devices window)
LightAct can also ingest content through Device nodes such as NDI, Spout or Unreal Texture Share. These nodes are found in the Devices window.
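To summarize the two ingestion categories described above, here is a minimal Python sketch. The node names come from this article; the dictionary structure and the source descriptions are purely illustrative and are not part of LightAct's API.

```python
# Illustrative summary of LightAct's content-ingestion options.
# Node names are taken from the article; everything else is a convenient
# way to list them, not LightAct's actual API.
INGESTION_NODES = {
    "layer_nodes": {            # added in a Layer Layout
        "Get Texture": "image files",
        "Play Video": "video files and image sequences",
        "Play Notch Block": "Notch blocks",
    },
    "device_nodes": {           # added in the Devices window
        "NDI": "NDI streams",
        "Spout": "Spout senders",
        "Unreal Texture Share": "Unreal Engine textures",
    },
}

for category, nodes in INGESTION_NODES.items():
    for node, source in nodes.items():
        print(f"{category}: {node} ingests {source}")
```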
Once you have your content in a layer, there are several ways it can go.
You can render your content (usually referred to as a Texture) to a Canvas using the Render to Canvas node. This is the most standard and recommended way, as it gives you the most options afterwards.
You can also push your content directly to Throwers, Viewport objects (such as video screens or 3D models) or Device Sender nodes (such as the NDI Sender node). To do that, use the Set Texture node.
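The two routing options above can be sketched conceptually in Python. The class and method names below are illustrative stand-ins for the Render to Canvas and Set Texture nodes described in this article, not LightAct's actual API.

```python
# Conceptual sketch of the two routing options for a texture in a layer.
# All names here are illustrative, not LightAct's API.

class Texture:
    def __init__(self, name):
        self.name = name

class Canvas:
    """Intermediate surface; the recommended route, as it keeps options open."""
    def __init__(self):
        self.content = None
    def render(self, texture):
        # Stands in for the Render to Canvas node.
        self.content = texture

class VideoScreen:
    """A viewport object that can also receive a texture directly."""
    def __init__(self):
        self.texture = None
    def set_texture(self, texture):
        # Stands in for the Set Texture node.
        self.texture = texture

clip = Texture("intro_clip")

# Option 1: render the texture to a canvas first (recommended).
canvas = Canvas()
canvas.render(clip)

# Option 2: push the texture straight to a viewport object.
screen = VideoScreen()
screen.set_texture(clip)
```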
Viewport objects are virtual replicas of real-world objects. There are four types:
- video screens
- DMX fixtures
- throwers
- 3D models
For a diagram of the various flow options, please refer to the image at the top of this article; for a bulleted list, read on.
- Video screens can get content from:
- DMX fixtures can get content from:
  - video screens
- Projectors can get content from:
  - video screens
  - 3D models
- 3D models can get content from:
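The source relationships that are explicitly spelled out in the list above can be expressed as a simple mapping. This is just an illustrative way to restate the list; the mapping is not part of LightAct.

```python
# Which viewport objects can receive content from which sources,
# per the explicitly listed relationships in this article.
# Illustrative only; not LightAct's API.
CONTENT_SOURCES = {
    "DMX fixtures": ["video screens"],
    "Projectors": ["video screens", "3D models"],
}

for target, sources in CONTENT_SOURCES.items():
    print(f"{target} can get content from: {', '.join(sources)}")
```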