Bring your own transformers in Swiftide

Published by Timon Vonk



Swiftide enables you to build indexing pipelines in a modular fashion, allowing for experimentation and blazing-fast, production-ready performance for Retrieval Augmented Generation (RAG).

Several built-in transformers are available, and you can also bring your own. We will take a look at different ways to customize the indexing pipeline to suit your needs. The style of customization also applies to the experimental query pipeline.

To learn more about Swiftide, head over to swiftide.rs or check us out on GitHub.

Understanding transformers

Transformers in Swiftide modify ingested nodes: adding or changing metadata, or splitting a node into multiple nodes (chunking). Each kind of modification is expressed with a trait.

A node represents a single piece of data moving through the indexing pipeline. It has a computed id, the chunk, metadata, embeddings, and several other helper fields.
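As a rough mental model, a node can be pictured like the following struct. This is illustrative only: the real Node lives in swiftide-core with more fields, and its metadata values are serde_json::Value rather than String.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeMap;
use std::hash::{Hash, Hasher};

// Simplified stand-in for swiftide's Node; the real definition in
// swiftide-core has more fields and richer metadata values.
#[derive(Debug, Clone)]
struct Node {
    id: u64,                            // computed from the node's content
    path: String,                       // origin of the data
    chunk: String,                      // the actual text content
    metadata: BTreeMap<String, String>, // added by transformers
    vector: Option<Vec<f32>>,           // filled in by an embedding step
}

impl Node {
    fn new(path: &str, chunk: &str) -> Self {
        // Derive a stable id from path + chunk
        let mut hasher = DefaultHasher::new();
        (path, chunk).hash(&mut hasher);
        Node {
            id: hasher.finish(),
            path: path.to_string(),
            chunk: chunk.to_string(),
            metadata: BTreeMap::new(),
            vector: None,
        }
    }
}
```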

For example, transforming a single node requires you to implement the Transformer trait, which is defined as follows:

pub trait Transformer: Send + Sync {
    async fn transform_node(&self, node: Node) -> Result<Node>;

    // Overrides the default concurrency of the pipeline
    fn concurrency(&self) -> Option<usize> {
        None
    }
}

You can find all current trait definitions in swiftide-core on GitHub.

A new transformer gets access to the full node, can modify it, propagate any errors, and is expected to return the node when done.

The concurrency setting can be used to tune the maximum number of concurrent operations this transformer handles. If not specified, the concurrency is taken from the pipeline, which by default equals the number of available CPUs.
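That resolution logic can be sketched with a minimal stand-in trait. The names here are illustrative, not the real swiftide-core items; only the Option-based override pattern matches the trait shown above.

```rust
// Minimal stand-in mirroring the shape of the concurrency hook:
// None means "use the pipeline default".
trait ConcurrencyHint {
    fn concurrency(&self) -> Option<usize> {
        None
    }
}

struct UsesPipelineDefault;
struct CappedAtFour;

impl ConcurrencyHint for UsesPipelineDefault {}

impl ConcurrencyHint for CappedAtFour {
    // This step limits itself to four concurrent operations
    fn concurrency(&self) -> Option<usize> {
        Some(4)
    }
}

// How a pipeline would resolve the effective concurrency for a step
fn effective_concurrency(step: &dyn ConcurrencyHint, pipeline_default: usize) -> usize {
    step.concurrency().unwrap_or(pipeline_default)
}
```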

Additionally, single and batch transformers need to be marked with WithIndexingDefaults and WithBatchIndexingDefaults respectively. These traits do not have to be implemented; marking the type is enough. However, if you do implement them, your transformer can work with defaults passed in from the pipeline.

Adding metadata to a node with a closure


Closures implement the Transformer trait as well, allowing for quick, ad hoc transformations without having to write a full transformer.

For example, we can add a hello-world entry to every node’s metadata:

pipeline
    .then(move |mut node: Node| {
        node.metadata.insert("Hello", "world");
        Ok(node)
    })

Under the hood, metadata is a BTreeMap with String keys and serde_json::Value values. This gives a lot of flexibility for programmatic transformation, rendering in prompts, and persisting to different datastores.
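As a plain-Rust illustration of that shape (simplified to String values here; real swiftide metadata values are serde_json::Value), building the map and rendering it for a prompt might look like:

```rust
use std::collections::BTreeMap;

// Render a simplified metadata map into a prompt-style context block.
// Swiftide stores serde_json::Value values; this sketch uses String.
fn render_metadata(metadata: &BTreeMap<String, String>) -> String {
    metadata
        .iter()
        .map(|(key, value)| format!("{key}: {value}"))
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let mut metadata = BTreeMap::new();
    metadata.insert("Hello".to_string(), "world".to_string());
    metadata.insert("Summary".to_string(), "A greeting example".to_string());

    // BTreeMap iterates in key order, so rendering is deterministic
    println!("{}", render_metadata(&metadata));
}
```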

Building your own transformer

Let’s implement the trait for a transformer that asks an LLM to explain the content of the node ‘like-I-am-five’. We will use the indexing_transformer attribute macro to generate boilerplate for the LLM client, derive a builder, Default, and Debug, and handle any pipeline defaults. The macro has several options for customization; note that it is not required.

The macro uses derive_builder under the hood to generate nice, consistent APIs. We can use the helper macros to further customize our transformer.

#[swiftide_macros::indexing_transformer]
struct ExplainLikeIAmFive {
    #[builder(default = "default_prompt()")]
    prompt_template: PromptTemplate,
}

fn default_prompt() -> PromptTemplate {
    // Here we define the prompt to be used. The syntax is like Jinja, using `Tera` under the hood.
    r"
    Explain the following text like I am five. Only respond with your explanation and nothing else.

    {{node.chunk}}
    ".into()
}

impl Transformer for ExplainLikeIAmFive {
    // `node` must be mutable, as we insert metadata into it below
    async fn transform_node(&self, mut node: Node) -> Result<Node> {
        // We transform the template to a prompt, and add the node as context
        let prompt: Prompt = self.prompt_template.to_prompt().with_node(&node);

        // Prompting uses `SimplePrompt`, which takes a prompt and returns a String
        let answer: String = self.prompt(prompt).await?;

        node.metadata.insert("ExplainLikeIAmFive", answer);

        Ok(node)
    }
}

That should do it! You can now run the transformer as follows:

// If a default llm on the pipeline has been set with `.with_default_llm_client`
pipeline
    .with_default_llm_client(openai)
    .then(ExplainLikeIAmFive::default())

// Or if you want to pass it manually
pipeline
    .then(ExplainLikeIAmFive::from_client(openai).build()?)

Implementing batching transformers and chunkers follows a similar pattern.
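To picture what a chunker does, here is a dependency-free sketch of the one-node-in, many-chunks-out idea. Treat it purely as the concept: the actual chunker trait in swiftide-core operates on nodes and returns a stream of them, and this splitting strategy is made up for illustration.

```rust
// Plain-Rust sketch of what a chunker does: one chunk in, many chunks out.
// Splits on blank lines, with a hard byte-length fallback for oversized
// paragraphs (an illustrative strategy, not swiftide's).
fn chunk_by_paragraph(chunk: &str, max_len: usize) -> Vec<String> {
    chunk
        .split("\n\n")
        .flat_map(|paragraph| {
            paragraph
                .as_bytes()
                .chunks(max_len)
                .map(|bytes| String::from_utf8_lossy(bytes).into_owned())
                .collect::<Vec<_>>()
        })
        .filter(|c| !c.trim().is_empty())
        .collect()
}
```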

Using the metadata downstream

All of Swiftide’s transformers have the full node available in the prompt. This allows for easy access to any metadata added by previous transformers.

For example, you can access previously added metadata in a custom prompt with {{ node.metadata["ExplainLikeIAmFive"] }}.

By default a single embedding is created on the combined chunk and metadata. Other embedding strategies are also supported.

This means that by adding metadata relevant to the chunk, the retrievability of the chunk is improved right away.
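As a rough sketch of what “combined” means here, the text handed to the embedder can be pictured as metadata lines prepended to the chunk. The exact formatting swiftide produces may differ; this only shows why added metadata ends up in the embedding.

```rust
// Sketch of a combined embedding input: metadata rendered above the chunk
// so a single embedding reflects both (simplified; not swiftide's exact text).
fn embeddable_text(metadata: &[(&str, &str)], chunk: &str) -> String {
    let rendered_metadata: String = metadata
        .iter()
        .map(|(key, value)| format!("{key}: {value}\n"))
        .collect();
    format!("{rendered_metadata}{chunk}")
}
```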

Finally, storage that supports it has all metadata persisted.

Happy hacking!

To learn more about Swiftide, head over to swiftide.rs or check us out on GitHub.