
Issue #3489273: Implement Advanced Input Mode with Token Chunking for Text Automator

Closes #3489273

This code handles the UI and configuration storage for the "Advanced Mode (Token, Chunked)" feature. However, to fully implement the functionality, you’ll need to:

  • Create a utility or service to split input text into chunks based on the token_chunk_size. This might involve integrating a tokenizer compatible with your LLM (e.g., TikToken for OpenAI models).
  • Modify the processing logic (likely in AiAutomatorFieldProcessManager or a related class) to:
      • Process each chunk sequentially through the LLM.
      • Pass the output of one chunk as context to the next, refining iteratively.
      • Combine the processed chunks into a cohesive final result, ensuring continuity (e.g., by summarizing or stitching outputs).
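The steps above can be sketched as follows. This is a minimal, hedged Python sketch of the intended flow, not the module's actual implementation (which would live in PHP, likely in AiAutomatorFieldProcessManager): the function names (`split_into_chunks`, `process_chunked`) and the prompt format are illustrative assumptions, and the `tokenize`/`detokenize`/`call_llm` callables stand in for a real tokenizer such as TikToken and a real LLM client.

```python
def split_into_chunks(tokens, chunk_size):
    """Split a token list into consecutive chunks of at most chunk_size tokens."""
    return [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]


def process_chunked(text, token_chunk_size, tokenize, detokenize, call_llm):
    """Process text chunk by chunk, carrying each chunk's output forward
    as context for the next, then stitch the outputs together.

    Hypothetical sketch: `tokenize`, `detokenize`, and `call_llm` are
    injected stand-ins for a real tokenizer and LLM integration.
    """
    chunks = split_into_chunks(tokenize(text), token_chunk_size)
    previous_output = ""
    results = []
    for chunk in chunks:
        chunk_text = detokenize(chunk)
        # Pass the previous chunk's output as context so the model
        # can refine iteratively across chunks.
        prompt = f"Context so far:\n{previous_output}\n\nProcess:\n{chunk_text}"
        previous_output = call_llm(prompt)
        results.append(previous_output)
    # Naive stitching; a production version might instead summarize the
    # accumulated outputs to keep the final result cohesive.
    return "\n".join(results)
```

For a quick local trial you could use a whitespace tokenizer (`str.split` / `" ".join`) in place of a model-specific one; with an OpenAI model you would swap in tiktoken's `encode`/`decode` and count real tokens against `token_chunk_size`.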
Edited by Anjali Prasannan
