Bases: BaseComponent
Perform sequential chain-of-thought with manually pre-defined prompts.
This method supports a variable number of steps. Each step corresponds to a
kotaemon.pipelines.cot.Thought
; please refer to that section for details on Thought. This section is about
chaining thoughts together.
Usage:
Create and run a chain of thought without "+" operator:
>>> from kotaemon.pipelines.cot import Thought, ManualSequentialChainOfThought
>>> llm = LCAzureChatOpenAI(...)
>>> thought1 = Thought(
...     prompt="Word {word} in {language} is ",
...     post_process=lambda string: {"translated": string},
... )
>>> thought2 = Thought(
...     prompt="Translate {translated} to Japanese",
...     post_process=lambda string: {"output": string},
... )
>>> thought = ManualSequentialChainOfThought(thoughts=[thought1, thought2], llm=llm)
>>> thought(word="hello", language="French")
{'word': 'hello',
 'language': 'French',
 'translated': '"Bonjour"',
 'output': 'こんにちは (Konnichiwa)'}
Create and run a chain of thought with the "+" operator: please refer to the
kotaemon.pipelines.cot.Thought
section for examples.
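As an illustration of the "+" composition pattern (a minimal standalone sketch, not the library's actual classes: `MiniChain` and the lambda steps below are hypothetical stand-ins for `ManualSequentialChainOfThought` and `Thought`):

```python
from typing import Callable, Dict, List

class MiniChain:
    """Sketch of a sequential chain that supports "+" composition."""

    def __init__(self, steps: List[Callable[[dict], dict]]):
        self.steps = steps

    def __add__(self, step: Callable[[dict], dict]) -> "MiniChain":
        # "+" returns a new chain with the extra step appended,
        # mirroring ManualSequentialChainOfThought.__add__.
        return MiniChain(self.steps + [step])

    def __call__(self, **kwargs) -> dict:
        inputs = dict(kwargs)
        for step in self.steps:
            inputs.update(step(inputs))  # each step sees all outputs so far
        return inputs

# Hypothetical steps standing in for Thought instances (no LLM involved).
translate = lambda d: {"translated": f"<{d['word']} in {d['language']}>"}
to_japanese = lambda d: {"output": f"ja({d['translated']})"}

chain = MiniChain([]) + translate + to_japanese
result = chain(word="hello", language="French")
```

Because `__add__` returns a new chain rather than mutating the existing one, partially built chains can be safely reused and extended.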
This chain-of-thought optionally takes a termination check callback function.
This function will be called after each thought is executed. It takes in a
dictionary of all thought outputs so far, and it returns True or False. If
True, the chain-of-thought will terminate. If unset, the default callback always
returns False.
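A minimal standalone sketch of that termination behavior, with plain callables standing in for Thought steps (`run_chain` is a hypothetical helper, not part of the library API):

```python
from typing import Callable, Dict, List

def run_chain(
    steps: List[Callable[[dict], dict]],
    terminate: Callable[[dict], bool] = lambda _: False,
    **kwargs,
) -> dict:
    """Mimic the chain's run loop: execute steps in order, merging each
    step's output dict into the shared inputs, and stop early once the
    terminate callback returns True."""
    inputs = dict(kwargs)
    for step in steps:
        inputs.update(step(inputs))
        if terminate(inputs):
            break  # later steps never run
    return inputs

steps = [
    lambda d: {"translated": "Bonjour"},
    lambda d: {"output": "konnichiwa"},
]

# Stop as soon as a translation is available; the second step is skipped.
early = run_chain(steps, terminate=lambda d: "translated" in d, word="hello")
# Default callback always returns False, so every step runs.
full = run_chain(steps, word="hello")
```

Note that the callback receives the full accumulated dictionary, so it can terminate on any earlier output, not just the most recent one.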
Source code in libs/kotaemon/kotaemon/llms/cot.py
class ManualSequentialChainOfThought(BaseComponent):
    """Perform sequential chain-of-thought with manually pre-defined prompts

    This method supports a variable number of steps. Each step corresponds to a
    `kotaemon.pipelines.cot.Thought`. Please refer to that section for details
    on Thought. This section is about chaining thoughts together.

    _**Usage:**_

    **Create and run a chain of thought without "+" operator:**

    ```pycon
    >>> from kotaemon.pipelines.cot import Thought, ManualSequentialChainOfThought
    >>> llm = LCAzureChatOpenAI(...)
    >>> thought1 = Thought(
    ...     prompt="Word {word} in {language} is ",
    ...     post_process=lambda string: {"translated": string},
    ... )
    >>> thought2 = Thought(
    ...     prompt="Translate {translated} to Japanese",
    ...     post_process=lambda string: {"output": string},
    ... )
    >>> thought = ManualSequentialChainOfThought(thoughts=[thought1, thought2], llm=llm)
    >>> thought(word="hello", language="French")
    {'word': 'hello',
     'language': 'French',
     'translated': '"Bonjour"',
     'output': 'こんにちは (Konnichiwa)'}
    ```

    **Create and run a chain of thought with "+" operator:** Please refer to the
    `kotaemon.pipelines.cot.Thought` section for examples.

    This chain-of-thought optionally takes a termination check callback
    function. This function is called after each thought is executed. It takes
    in a dictionary of all thought outputs so far and returns True or False. If
    True, the chain-of-thought terminates. If unset, the default callback
    always returns False.
    """

    thoughts: List[Thought] = Param(
        default_callback=lambda *_: [], help="List of Thought"
    )
    llm: LLM = Param(help="The LLM model to use (base of kotaemon.llms.BaseLLM)")
    terminate: Callable = Param(
        default=lambda _: False,
        help="Callback on terminate condition. Default to always return False",
    )

    def run(self, **kwargs) -> Document:
        """Run the manual chain of thought"""
        inputs = deepcopy(kwargs)
        for idx, thought in enumerate(self.thoughts):
            if self.llm:
                thought.llm = self.llm
            self._prepare_child(thought, f"thought{idx}")
            output = thought(**inputs)
            inputs.update(output.content)
            if self.terminate(inputs):
                break
        return Document(inputs)

    def __add__(self, next_thought: Thought) -> "ManualSequentialChainOfThought":
        return ManualSequentialChainOfThought(
            thoughts=self.thoughts + [next_thought], llm=self.llm
        )
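The data flow in `run` is worth spelling out: each thought's `post_process` result is merged into the shared input dict, so later prompts can reference earlier outputs as template variables. A standalone sketch of that mechanism, assuming `str.format`-style placeholder filling (the real Thought's prompt handling may differ):

```python
prompts = ["Word {word} in {language} is ", "Translate {translated} to Japanese"]
inputs = {"word": "hello", "language": "French"}

# The first prompt only needs the caller's keyword arguments.
p1 = prompts[0].format(**inputs)

# Pretend the LLM answered and post_process wrapped it in a dict;
# merging it into `inputs` mirrors `inputs.update(output.content)`.
inputs.update({"translated": "Bonjour"})

# The second prompt can now reference the first step's output key.
p2 = prompts[1].format(**inputs)
```

This is why each `post_process` must return a dict: its keys become the template variables available to every subsequent prompt.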
run
Run the manual chain of thought
Source code in libs/kotaemon/kotaemon/llms/cot.py
def run(self, **kwargs) -> Document:
    """Run the manual chain of thought"""
    inputs = deepcopy(kwargs)
    for idx, thought in enumerate(self.thoughts):
        if self.llm:
            thought.llm = self.llm
        self._prepare_child(thought, f"thought{idx}")
        output = thought(**inputs)
        inputs.update(output.content)
        if self.terminate(inputs):
            break
    return Document(inputs)