Properties

Optional criterion
Optional evaluation
Optional llm
Optional memory
Optional skip

Methods

checkEvaluationArgs

Check if the evaluation arguments are valid.
Parameters:
- Optional reference: string. The reference label.
- Optional input: string. The input string.

Throws:
If the evaluator requires an input string but none is provided, or if the evaluator requires a reference label but none is provided.
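The validation described above can be sketched in plain TypeScript. The requiresInput and requiresReference flags are assumptions standing in for the evaluator's own configuration; this is an illustrative sketch, not the library's implementation.

```typescript
// Sketch of the argument check, not the library's implementation.
// requiresInput / requiresReference stand in for the evaluator's flags.
function checkEvaluationArgs(
  requiresInput: boolean,
  requiresReference: boolean,
  reference?: string,
  input?: string,
): void {
  if (requiresInput && input === undefined) {
    throw new Error("This evaluator requires an input string.");
  }
  if (requiresReference && reference === undefined) {
    throw new Error("This evaluator requires a reference label.");
  }
}
```

The check is deliberately order-independent: both requirements are tested so the first missing argument reported is deterministic.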
_evaluateStrings

Evaluate Chain or LLM output, based on optional input and label.

Parameters:
- Optional callOptions: unknown
- Optional config: any

Returns:
The evaluation results containing the score or value. It is recommended that the dictionary contain the following keys: score, value, and reasoning.
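To make the recommended result shape concrete, here is a hypothetical parser that splits an LLM verdict into reasoning, value, and score. The output format assumed here (free-text reasoning followed by a final Y/N line) is an illustrative assumption, not the chain's actual output parser.

```typescript
// Hypothetical evaluation result shape (score/value/reasoning keys are
// a common convention for string evaluators).
interface EvalResult {
  score?: number;     // numeric score, if applicable
  value?: string;     // string verdict, e.g. "Y" or "N"
  reasoning?: string; // free-text reasoning, if applicable
}

// Assume the LLM answers with reasoning lines followed by a final Y/N.
function parseEvalOutput(text: string): EvalResult {
  const lines = text.trim().split("\n");
  const verdict = lines[lines.length - 1].trim();
  return {
    reasoning: lines.slice(0, -1).join("\n").trim(),
    value: verdict,
    score: verdict === "Y" ? 1 : verdict === "N" ? 0 : undefined,
  };
}
```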
call

Invoke the chain with the provided input and return the output.

Parameters:
- values: Input values for the chain run.
- Optional config: any. Optional configuration for the Runnable.

Returns:
Promise that resolves with the output of the chain run.
predict

Format prompt with values and pass to LLM.

Parameters:
- values: Keys to pass to the prompt template.
- Optional callbackManager: any. CallbackManager to use.

Returns:
Completion from LLM.

Example:
llm.predict({ adjective: "funny" })
Static deserialize

Static fromLLM

Create a new instance of the CriteriaEvalChain.

Parameters:
- Optional criteria: CriteriaLike
- Optional chainOptions: Partial<Omit<LLMEvalChainInput<EvalOutputType, BaseLanguageModelInterface>, "llm">>. Options to pass to the constructor of the LLMChain.
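As a rough illustration of what the factory sets up, the sketch below assembles the criteria section of an evaluation prompt from resolved criteria. The helper name and prompt wording are illustrative assumptions, not the library's actual template.

```typescript
// Illustrative only: build the criteria section of an evaluation prompt
// from a name-to-description mapping. Not the library's real template.
function buildCriteriaPrompt(criteria: Record<string, string>): string {
  const criteriaText = Object.entries(criteria)
    .map(([name, description]) => `${name}: ${description}`)
    .join("\n");
  return [
    "You are assessing a submitted answer against the criteria below.",
    "[Criteria]:",
    criteriaText,
    "Respond with reasoning, then a final Y or N verdict.",
  ].join("\n");
}
```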
Static resolveCriteria

Resolve the criteria to evaluate.

Parameters:
- Optional criteria: CriteriaLike. The criteria to evaluate the runs against. It can be:
  - a mapping of a criterion name to its description
  - a single criterion name present in one of the default criteria
  - a single ConstitutionalPrinciple instance

Returns:
A dictionary mapping criterion names to descriptions.
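The resolution rules above can be sketched as follows. The type aliases and the default criteria are illustrative stand-ins for the library's built-in definitions, not the real ones.

```typescript
// Minimal stand-ins for the library's types (assumptions for this sketch).
type ConstitutionalPrinciple = { name: string; critiqueRequest: string };
type CriteriaLike = string | Record<string, string> | ConstitutionalPrinciple;

// Illustrative defaults; the real library ships its own descriptions.
const DEFAULT_CRITERIA: Record<string, string> = {
  conciseness: "Is the submission concise and to the point?",
  relevance: "Is the submission relevant to the input?",
};

function resolveCriteria(criteria?: CriteriaLike): Record<string, string> {
  if (criteria === undefined) {
    // No criteria given: fall back to a default criterion.
    return { helpfulness: "Is the submission helpful?" };
  }
  if (typeof criteria === "string") {
    // A single criterion name: look it up among the defaults.
    const description = DEFAULT_CRITERIA[criteria];
    if (description === undefined) {
      throw new Error(`Unknown criterion: ${criteria}`);
    }
    return { [criteria]: description };
  }
  if ("critiqueRequest" in criteria) {
    // A ConstitutionalPrinciple: use its name and critique request.
    return { [criteria.name]: criteria.critiqueRequest };
  }
  // Already a mapping of criterion name to description: pass through.
  return criteria;
}
```

Each branch normalizes one of the accepted input shapes into the same name-to-description dictionary, which is what downstream prompt construction consumes.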
apply

⚠️ Deprecated ⚠️ Use .batch() instead. Will be removed in 0.2.0.

Call the chain on all inputs in the list.
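Since .batch() is the recommended replacement, a simplified model of what batching does (run an async function over every input with bounded concurrency) may help. The function name and signature below are assumptions for illustration, not the Runnable API.

```typescript
// Simplified model of batching: apply an async fn to each input with
// bounded concurrency. Not the actual Runnable.batch implementation.
async function batchRun<I, O>(
  run: (input: I) => Promise<O>,
  inputs: I[],
  maxConcurrency = 5,
): Promise<O[]> {
  const results: O[] = new Array(inputs.length);
  let next = 0;
  // Each worker claims the next unprocessed index until inputs run out.
  async function worker(): Promise<void> {
    while (next < inputs.length) {
      const i = next++;
      results[i] = await run(inputs[i]);
    }
  }
  const workerCount = Math.min(maxConcurrency, inputs.length);
  await Promise.all(Array.from({ length: workerCount }, () => worker()));
  return results;
}
```

Results are written back by index, so output order matches input order even though inputs finish out of order.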