To use this model you need to have the node-llama-cpp module installed. It can be installed with npm install -S node-llama-cpp; the minimum supported version is 2.0.0. You will also need a locally built version of Llama2 installed.
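For instance, from your project's root directory:

```bash
npm install -S node-llama-cpp
```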
Example
```typescript
import { ChatLlamaCpp } from "langchain/chat_models/llama_cpp";
import { HumanMessage } from "langchain/schema";

// Initialize the ChatLlamaCpp model with the path to the model binary file.
const model = new ChatLlamaCpp({
  modelPath: "/Replace/with/path/to/your/model/gguf-llama2-q4_0.bin",
  temperature: 0.5,
});

// Call the model with a message and await the response.
const response = await model.call([
  new HumanMessage({ content: "My name is John." }),
]);

// Log the response to the console.
console.log({ response });
```
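Prior turns can be passed back in through the messages array to give the model conversational context. Here is a minimal sketch of a follow-up call that reuses the model instance and the response from above; the follow-up question is purely illustrative:

```typescript
// Follow-up call that includes the earlier exchange as context.
// The second question here is an illustrative example, not part of the docs above.
const followUp = await model.call([
  new HumanMessage({ content: "My name is John." }),
  response,
  new HumanMessage({ content: "What is my name?" }),
]);
console.log({ followUp });
```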