Getting Started with LangChain - About Agents

armyost 2024. 6. 29. 22:50

So far we have built sample chains where every step is known in advance. The last thing we will build is an agent, where the LLM itself decides which steps to take.

Note: in this example we only show how to build an agent with an OpenAI model, because local models are not yet reliable enough for this.

One of the first things to do when building an agent is to decide which tools it should have access to. In this example we give the agent access to the following two tools:

1. The retriever we just created. This makes it easy to answer questions about LangSmith.
2. A search tool. This makes it easy to answer questions that require up-to-date information.

Let's start with the hands-on exercise for tool 1.
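The `retriever` variable passed below was built in the earlier part of this series. As a reminder, a minimal sketch of how it might have been constructed (assuming the LangSmith docs were loaded, split, embedded with OpenAI, and indexed into a FAISS vector store, as in the LangChain quickstart; requires `OPENAI_API_KEY`):

```python
# Hypothetical reconstruction of the retriever from the earlier post:
# load the LangSmith docs, split them, embed them, and index them in FAISS.
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

docs = WebBaseLoader("https://docs.smith.langchain.com/overview").load()
documents = RecursiveCharacterTextSplitter().split_documents(docs)
vector = FAISS.from_documents(documents, OpenAIEmbeddings())
retriever = vector.as_retriever()
```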

from langchain.tools.retriever import create_retriever_tool

retriever_tool = create_retriever_tool(
    retriever,
    "langsmith_search",
    "Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
)



For tool 2, the search tool we will use is Tavily. This requires an API key; after creating one on their platform, you need to set it as an environment variable.
※ The Tavily Search API is a search engine built specifically for AI agents (LLMs), delivering accurate, factual, real-time results at speed.

https://tavily.com/

 

 

export TAVILY_API_KEY=...
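If you prefer to set the key from within Python rather than your shell, the standard-library `os.environ` works equally well, since the Tavily client reads the key from the process environment (the key value below is a placeholder, not a real key):

```python
import os

# Equivalent to `export TAVILY_API_KEY=...` in the shell: libraries that
# read os.environ at call time will pick this value up.
os.environ["TAVILY_API_KEY"] = "tvly-your-key-here"  # placeholder value
```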

 

from langchain_community.tools.tavily_search import TavilySearchResults

search = TavilySearchResults()

tools = [retriever_tool, search]



Now we can fetch a predefined prompt to use with these tools.

pip install langchainhub
pip install langchain-openai

from langchain_openai import ChatOpenAI
from langchain import hub
from langchain.agents import create_openai_functions_agent
from langchain.agents import AgentExecutor

# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-functions-agent")

# You need to set OPENAI_API_KEY environment variable or pass it as argument `api_key`.
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
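Conceptually, `AgentExecutor` runs a loop: at each step the LLM either picks a tool to call (and sees the tool's output as an observation) or decides to finish with a final answer. A rough standard-library sketch of that loop, with a stub model and stub tool (all names here are illustrative, not LangChain APIs):

```python
# Illustrative sketch of the loop AgentExecutor runs internally.
# fake_llm stands in for the chat model; a real agent lets the LLM choose
# among the bound tools via OpenAI function calling.

def fake_llm(question, observations):
    # Decide the next action: call a tool, or finish with an answer.
    if not observations and "langsmith" in question.lower():
        return ("call_tool", "langsmith_search", question)
    if observations:
        return ("finish", "Based on the docs: " + observations[-1])
    return ("finish", "I cannot answer that.")

def run_agent(question, tools, max_iterations=5):
    observations = []
    for _ in range(max_iterations):  # AgentExecutor caps steps similarly
        action = fake_llm(question, observations)
        if action[0] == "finish":
            return action[1]
        _, tool_name, tool_input = action
        observations.append(tools[tool_name](tool_input))
    return "Agent stopped: max iterations reached."

# Stub tool standing in for retriever_tool above.
tools = {"langsmith_search": lambda q: "LangSmith lets you trace and evaluate LLM runs."}
print(run_agent("How can LangSmith help with testing?", tools))
# -> Based on the docs: LangSmith lets you trace and evaluate LLM runs.
```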




Now we can invoke the agent and see how it responds! Let's ask it about LangSmith.

 

agent_executor.invoke({"input": "how can langsmith help with testing?"})
agent_executor.invoke({"input": "what is the weather in SF?"})

 

We get a detailed answer to the LangSmith question, but the answer to the weather question is rather dismissive:

{'input': 'what is the weather in SF?',
 'output': "I'm sorry, but I am not able to provide real-time weather information. You can check the weather in San Francisco by using a weather website or app like Weather.com or AccuWeather."}

 



If you want a conversational (multi-turn) agent, you can write it as follows.

from langchain_core.messages import HumanMessage, AIMessage

chat_history = [HumanMessage(content="Can LangSmith help test my LLM applications?"), AIMessage(content="Yes!")]
agent_executor.invoke({
    "chat_history": chat_history,
    "input": "Tell me how"
})