I've finished all of the Week 1 LangChain assignments, so I'm sharing the results!
I'm a software developer who doesn't normally use Python much, but working through the assignments was a chance to pick up some Python and deepen my understanding of LangChain. Let's all keep at it and see the assignments through to the end!
LangChain 1-1. Composing a Chain with a Prompt and an LLM
from dotenv import load_dotenv
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
load_dotenv()
# 1. Learn how to build a chain that takes user input, adds it to a prompt, and passes it to the model to return the raw model output.
# 2. First, install the required libraries and combine a prompt template with an LLM to build a basic chain.
# 3. Then, learn how to add an output parser to convert the output format and how to add components that simplify the input.
prompt = input()
prompt_template = PromptTemplate.from_template(prompt)
llm = ChatOpenAI()
output_parser = StrOutputParser()
chain = prompt_template | llm | output_parser
result = chain.invoke({})
print(result)
python3 chain_composition.py
곰에 관한 농담을 해줘.
왜 곰이 항공편을 타지 않는 이유가 뭐야?
- 비행기가 곰방울이라서!
A "곰방울" (roughly "bear-bell") plane...? I suspect an American bear pun got mangled in translation.
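The exercise also mentions adding components to simplify the input. A minimal sketch of a templated variant (the {topic} variable and the prompt text below are my own example, not part of the assignment):

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

# Hypothetical templated version: the {topic} variable is filled in at invoke time.
prompt_template = PromptTemplate.from_template("Tell me a joke about {topic}.")
chain = prompt_template | ChatOpenAI() | StrOutputParser()
# The dict key must match the template variable name.
print(chain.invoke({"topic": "bears"}))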
LangChain 1-2. Answering a Complex Question by Connecting Multiple Chains
from dotenv import load_dotenv
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
load_dotenv()
# Learn how to answer a question using two or more chains.
# 1. The first chain finds the city a given person is from.
# ex) "what is the city Barack Obama is from?"
# 2. The second chain finds the country that city is in, answering in the requested language.
# ex) "what country is the city Seoul in? respond in french"
city_prompt = PromptTemplate.from_template(
"""
what is the city {name} is from? say only city name
"""
)
result_prompt = PromptTemplate.from_template(
"""
In which country is {city} where {name} lived? respond in {language}. and say full sentence
"""
)
llm = ChatOpenAI()
output_parser = StrOutputParser()
city_chain = city_prompt | llm | output_parser
result_chain = result_prompt | llm | output_parser
if __name__ == '__main__':
    name = input("who is the person? : ")
    language = input("what language? : ")
    city = city_chain.invoke({"name": name})
    result = result_chain.invoke({"city": city, "name": name, "language": language})
    print(result)
python3 multi_chain_query.py
who is the person? : Soon-Shin Lee
what language? : English
Seoul, where Soon-Shin Lee lived, is located in South Korea.
I tried to write the prompts in English as much as possible, since I figured English would be easier for ChatGPT to understand.
LangChain 1-3. Composing Chains with Branching and Merging
from dotenv import load_dotenv
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableParallel
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
load_dotenv()
# 1. Learn how to build a chain that takes a single input, processes it in several components at once,
# and then merges the results in another component to produce the final response.
# This lets you build more complex computation graphs and analyze a problem from multiple angles.
# 2. First, build a component that generates a debate topic for the subject. Then create branches that
# evaluate the positive and negative aspects of that debate separately.
# Finally, add a component that produces a comprehensive response based on those analyses.
# 3. Through this process you practice analyzing a complex problem from multiple angles and combining
# the analyses to reach a decision.
topic_prompt = PromptTemplate.from_template(
"""
What is the debate about the {subject}. Just tell me the topic of debate.
"""
)
positive_prompt = PromptTemplate.from_template(
"""
What is the positive aspects of the {subject} {topic} topic.
"""
)
negative_prompt = PromptTemplate.from_template(
"""
What is the negative aspects of the {subject} {topic} topic.
"""
)
result_prompt = PromptTemplate.from_template(
"""
Please provide a {subject} {topic} topic comprehensive conclusion based on positive and negative aspects.
respond in Korean.
positive aspects: {positive}
negative aspects: {negative}
"""
)
llm = ChatOpenAI()
output_parser = StrOutputParser()
topic_chain = topic_prompt | llm | output_parser
positive_chain = positive_prompt | llm | output_parser
negative_chain = negative_prompt | llm | output_parser
result_chain = result_prompt | llm | output_parser
if __name__ == '__main__':
    subject = input()
    topic = topic_chain.invoke({"subject": subject})
    parallel = RunnableParallel(positive=positive_chain, negative=negative_chain)
    parallel_result = parallel.invoke({"subject": subject, "topic": topic})
    result = result_chain.invoke({
        "subject": subject, "topic": topic,
        "positive": parallel_result["positive"], "negative": parallel_result["negative"],
    })
    print(result)
python3 branching_merging.py
스크럼
종합적으로 볼 때, 스크럼은 복잡한 프로젝트를 관리하는 데 효과적인 프레임워크로 인정받고 있지만 몇 가지 부정적인 측면도 존재합니다.
긍정적인 측면으로는, 스크럼은 팀원 간의 소통과 협력을 증진시키는 데 도움을 주며, 유연성과 적응성을 갖추고 있습니다. 또한 프로젝트 진행 상황을 투명하게 보여주고 지속적인 개선을 장려하며 가치 중심적인 접근을 강조합니다. 이러한 이점들은 스크럼을 복잡한 프로젝트를 성공적으로 관리하는 데 도움을 줍니다.
하지만 부정적인 측면으로는, 일부 비파괴적인 면이 존재합니다. 스크럼의 엄격한 구조와 고정된 역할은 모든 프로젝트나 팀에 적합하지 않을 수 있으며, 프로세스에 과도하게 집착하거나 규뢰 부족이 있을 경우 효과적으로 작동하지 않을 수 있습니다. 또한 스크럼의 구체적인 지침이 부족하다는 비파괴성도 있습니다.
이러한 긍정적인 측면과 부정적인 측면을 종합적으로 고려할 때, 스크럼은 효과적인 프로젝트 관리를 위한 좋은 프레임워크이지만, 모든 프로젝트에 적합하다고 단정하기는 어렵다는 점을 염두에 두어야 합니다. 각 프로젝트나 조직의 특성에 맞게 적절히 적용해야만 최상의 결과를 얻을 수 있을 것입니다.
I used RunnableParallel to cut down the execution time. (It still takes quite a while...)
Using GPT-4 for the chain that produces the final conclusion would probably give even better results!
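A minimal sketch of that idea (the model name below is just an example; any stronger model could be substituted):

# Use a stronger model only for the final conclusion chain; the branch chains keep the default model.
result_llm = ChatOpenAI(model="gpt-4")
result_chain = result_prompt | result_llm | output_parser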
LangChain 1-4. Adding Memory to a Chain
from dotenv import load_dotenv
from langchain.memory import ConversationBufferMemory
from langchain_openai import ChatOpenAI
from langchain.chains import ConversationChain
load_dotenv()
# 1. Learn how to add memory to an arbitrary chain so it can keep the context of a conversation.
# Along the way, practice using a memory class and wiring it up manually.
# 2. First, learn how to store the user's input and the chatbot's response in memory,
# and how to use the context of the previous turn in the next one.
# 3. For example, when the user says "hi im bob", store that in memory so that the later question
# "whats my name" can be answered with "Your name is Bob."
llm = ChatOpenAI()
conversation = ConversationChain(
llm=llm,
memory=ConversationBufferMemory(),
)
if __name__ == '__main__':
    print("hi im bob")
    message = conversation.predict(input="hi im bob")
    print(message)
    print("whats my name")
    message = conversation.predict(input="whats my name")
    print(message)
python3 add_memory_to_chain.py
hi im bob
Hello Bob! It's nice to meet you. How can I assist you today?
whats my name
Your name is Bob, as you mentioned earlier. Is there anything else I can help you with, Bob?
This was easy to put together with ConversationChain and ConversationBufferMemory.
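The exercise also mentions wiring the memory up manually to an arbitrary chain. Here is a rough sketch of how that could look (the prompt text, the ask helper, and the variable names are my own, so treat it as an illustration rather than the assignment's intended solution):

from dotenv import load_dotenv
from langchain.memory import ConversationBufferMemory
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

load_dotenv()

memory = ConversationBufferMemory()
prompt = PromptTemplate.from_template(
    "The following is a conversation between a human and an AI.\n"
    "{history}\n"
    "Human: {input}\n"
    "AI:"
)
# Pull the stored history out of the memory object on every call.
chain = (
    RunnablePassthrough.assign(history=lambda _: memory.load_memory_variables({})["history"])
    | prompt
    | ChatOpenAI()
    | StrOutputParser()
)

def ask(text: str) -> str:
    answer = chain.invoke({"input": text})
    # Save the turn manually so the next call sees it in {history}.
    memory.save_context({"input": text}, {"output": answer})
    return answer

print(ask("hi im bob"))
print(ask("whats my name"))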
LangChain 1-5. Querying a SQL Database
from dotenv import load_dotenv
from langchain.chains import create_sql_query_chain
from langchain_community.agent_toolkits import create_sql_agent
from langchain_community.tools import QuerySQLDataBaseTool
from langchain_community.utilities import SQLDatabase
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
load_dotenv()
# 1. Learn how to query a SQL database to answer a user's question.
# This covers the full flow: generating a SQL query, running it against the database,
# and turning the result into a natural-language answer.
# 2. First, use the Chinook sample database, fetch the table schema, and write a SQL query based on it.
# Then run the query, take the result, and generate a natural-language response from it.
# 3. For example, for the question "How many employees are there?", generate a SQL query and,
# based on the execution result, produce the answer "There are 8 employees."
llm = ChatOpenAI()
db = SQLDatabase.from_uri("sqlite:///Chinook.db")
# agent_executor = create_sql_agent(llm, db=db, agent_type="openai-tools")
query_chain = create_sql_query_chain(llm, db) | QuerySQLDataBaseTool(db=db)
result_prompt = PromptTemplate.from_template(
"""
User question: {question}
Database response: {query_result}
Answer user question by looking at the database search results.
"""
)
result_chain = RunnablePassthrough.assign(query_result=query_chain) | result_prompt | llm | StrOutputParser()
if __name__ == '__main__':
    question = input()
    result = result_chain.invoke({"question": question})
    print(result)
python3 query_sql_db.py
How many employees are there?
There are 8 employees.
An easy implementation was also possible with the ready-made create_sql_agent (the commented-out line above)!
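For reference, a minimal sketch of that agent-based variant, reusing the llm and db objects from the script above (this just expands the commented-out line; the exact output wording may differ from the chain version):

# The agent decides on its own which SQL to run against the Chinook database.
agent_executor = create_sql_agent(llm, db=db, agent_type="openai-tools", verbose=True)
response = agent_executor.invoke({"input": "How many employees are there?"})
print(response["output"])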
LangChain 2-1. Similarity Search with a Vector Store
from dotenv import load_dotenv
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import Chroma
from langchain_openai.embeddings import OpenAIEmbeddings
load_dotenv()
# One of the most common ways to store and search unstructured data is to embed it, store the resulting
# vectors, and then embed the query at search time to find the 'most similar' embedding vectors.
# In this exercise you learn how to store embedded data in a vector store and run vector searches.
#
# 1. In the first step, load the document (state_of_the_union.txt) and split it into chunks of a suitable size.
# Then embed the chunks and load them into the vector store.
# In the second step, run a similarity search for a given query and print the 'most similar' result.
# 2. For example, for the query "What did the president say about Ketanji Brown Jackson",
# run a similarity search (similarity_search), and then, as a second step,
# run a maximal marginal relevance (MMR) search and print the contents of the 2 retrieved documents.
full_text = TextLoader("state_of_the_union.txt")
text_splitter = CharacterTextSplitter()
embedding = OpenAIEmbeddings()
split_texts = full_text.load_and_split(text_splitter)
chroma_db = Chroma.from_documents(split_texts, embedding)
similarity_retriever = chroma_db.as_retriever(search_type="similarity", search_kwargs={"k": 1})
mmr_retriever = chroma_db.as_retriever(search_type="mmr", search_kwargs={"k": 2})
if __name__ == '__main__':
    query = "What did the president say about Ketanji Brown Jackson"
    similarity_result = similarity_retriever.invoke(query)
    mmr_result = mmr_retriever.invoke(query)
    print("--------------similarity_search-----------------")
    print(similarity_result[0].page_content)
    print("\n")
    print("--------------mmr_search-----------------")
    for i in range(len(mmr_result)):
        print(f"{i + 1}. {mmr_result[i].page_content}")
        print("\n")
python3 vector_store_search.py
Number of requested results 20 is greater than number of elements in index 11, updating n_results = 11
--------------similarity_search-----------------
They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun.
Officer Mora was 27 years old.
Officer Rivera was 22.
Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers.
I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.
I’ve worked on these issues a long time.
I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
So let’s not abandon our streets. Or choose between safety and equal justice.
Let’s come together to protect our communities, restore trust, and hold law enforcement accountable.
That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.
That’s why the American Rescue Plan provided $350 Billion that cities, states, and counties can use to hire more police and invest in proven strategies like community violence interruption—trusted messengers breaking the cycle of violence and trauma and giving young people hope.
We should all agree: The answer is not to Defund the police. The answer is to FUND the police with the resources and training they need to protect our communities.
I ask Democrats and Republicans alike: Pass my budget and keep our neighborhoods safe.
And I will keep doing everything in my power to crack down on gun trafficking and ghost guns you can buy online and make at home—they have no serial numbers and can’t be traced.
And I ask Congress to pass proven measures to reduce gun violence. Pass universal background checks. Why should anyone on a terrorist list be able to purchase a weapon?
Ban assault weapons and high-capacity magazines.
Repeal the liability shield that makes gun manufacturers the only industry in America that can’t be sued.
These laws don’t infringe on the Second Amendment. They save lives.
The most fundamental right in America is the right to vote – and to have it counted. And it’s under assault.
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.
--------------mmr_search-----------------
1. They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun.
Officer Mora was 27 years old.
Officer Rivera was 22.
Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers.
I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.
I’ve worked on these issues a long time.
I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
So let’s not abandon our streets. Or choose between safety and equal justice.
Let’s come together to protect our communities, restore trust, and hold law enforcement accountable.
That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.
That’s why the American Rescue Plan provided $350 Billion that cities, states, and counties can use to hire more police and invest in proven strategies like community violence interruption—trusted messengers breaking the cycle of violence and trauma and giving young people hope.
We should all agree: The answer is not to Defund the police. The answer is to FUND the police with the resources and training they need to protect our communities.
I ask Democrats and Republicans alike: Pass my budget and keep our neighborhoods safe.
And I will keep doing everything in my power to crack down on gun trafficking and ghost guns you can buy online and make at home—they have no serial numbers and can’t be traced.
And I ask Congress to pass proven measures to reduce gun violence. Pass universal background checks. Why should anyone on a terrorist list be able to purchase a weapon?
Ban assault weapons and high-capacity magazines.
Repeal the liability shield that makes gun manufacturers the only industry in America that can’t be sued.
These laws don’t infringe on the Second Amendment. They save lives.
The most fundamental right in America is the right to vote – and to have it counted. And it’s under assault.
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.
2. We are the only nation on Earth that has always turned every crisis we have faced into an opportunity.
The only nation that can be defined by a single word: possibilities.
So on this night, in our 245th year as a nation, I have come to report on the State of the Union.
And my report is this: the State of the Union is strong—because you, the American people, are strong.
We are stronger today than we were a year ago.
And we will be stronger a year from now than we are today.
Now is our moment to meet and overcome the challenges of our time.
And we will, as one people.
One America.
The United States of America.
May God bless you all. May God protect our troops.
Results and performance will probably vary quite a bit depending on the splitter options, the embedding model, the kind of vector DB, and so on. When I have time, it would be worth tuning these and comparing the results.
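A rough sketch of the kind of knobs I mean (the specific values and model name below are just examples, not tested settings):

# Smaller chunks with some overlap, and an explicitly chosen embedding model.
text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=50)
embedding = OpenAIEmbeddings(model="text-embedding-3-small")
split_texts = full_text.load_and_split(text_splitter)
chroma_db = Chroma.from_documents(split_texts, embedding)
# fetch_k controls how many candidates MMR considers before picking the final k,
# which should also avoid the "requested results 20 > elements in index" warning above.
mmr_retriever = chroma_db.as_retriever(search_type="mmr", search_kwargs={"k": 2, "fetch_k": 10})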
LangChain 2-2. Building a Retrieval-Augmented Generation Chain
from dotenv import load_dotenv
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
from langchain_openai.embeddings import OpenAIEmbeddings
load_dotenv()
# "검색-강화 생성" 체인을 통해 특정 질문에 대한 컨텍스트를 검색하고, 이를 기반으로 질문에 답하는 방법을 배웁니다.
# 이 과정에서 벡터 저장소를 활용하여 관련 컨텍스트를 검색하고, 검색된 컨텍스트를 사용하여 모델이 질문에 답하도록 합니다.
# 1. “harrison worked at kensho”라는 문장을 FAISS에 저장합니다.
# 2. "where did harrison work?”라는 질문에 대한 답변을 출력합니다.
# 3. 두번째 체인에서는 “where did harrison work?”이라는 질문에 대한 답변을 이탈리아어로 답변을 출력합니다.
embedding = OpenAIEmbeddings()
faiss_db = FAISS.from_texts(["harrison worked at kensho"], embedding=embedding)
retriever = faiss_db.as_retriever(search_type="similarity")
prompt_template = PromptTemplate.from_template(
"""
Answer the question based only on the following context in italian:
{context}
Question: {question}
"""
)
italian_chain = (
{"context": retriever, "question": RunnablePassthrough()}
| prompt_template
| ChatOpenAI()
| StrOutputParser()
)
if __name__ == '__main__':
    query = "where did harrison work?"
    result = retriever.invoke(query)
    print(f"Based on the provided context, {result[0].page_content}")
    italian_result = italian_chain.invoke(query)
    print(italian_result)
python3 retrieval_augmented_chain.py
Based on the provided context, harrison worked at kensho
Harrison ha lavorato a Kensho.
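The first answer above just echoes the retrieved text rather than going through the model. A sketch of an actual first chain that lets the model answer in English, reusing the retriever defined above (the prompt wording is my own):

english_prompt = PromptTemplate.from_template(
    """
    Answer the question based only on the following context:
    {context}
    Question: {question}
    """
)
english_chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | english_prompt
    | ChatOpenAI()
    | StrOutputParser()
)
print(english_chain.invoke("where did harrison work?"))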
LangChain 2-3. Automated Task Handling with Agents and Executors
from dotenv import load_dotenv
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.runnables import RunnablePassthrough
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
load_dotenv()
# 1. Learn how to pass a Runnable to an agent to automate a specific task.
# This covers how to build and run an agent that picks the right tool for the user's question or request
# and processes the result into a final response.
# 2. Using the dummy tool below, practice generating a response to the question "whats the weather in New york?"
@tool()
def search_weather(query: str) -> str:
    """Search about weather."""
    return "32 degrees"
tavily = TavilySearchResults(max_results=2)
tools = [tavily, search_weather]
llm = ChatOpenAI()
prompt = hub.pull("hwchase17/openai-functions-agent")
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_chain = {"input": RunnablePassthrough()} | agent_executor
if __name__ == '__main__':
    question = input()
    response = agent_chain.invoke(question)
    print(response["output"])
python3 automated_task_agent.py
whats the weather in New york?
> Entering new AgentExecutor chain...
Invoking: `search_weather` with `{'query': 'weather in New York'}`
32 degreesThe current weather in New York is 32 degrees.
> Finished chain.
The current weather in New York is 32 degrees.
python3 automated_task_agent.py
What is the gpters
> Entering new AgentExecutor chain...
Invoking: `tavily_search_results_json` with `{'query': 'gpters'}`
[{'url': 'https://portal.gpters.org/', 'content': '국내 최대 AI 커뮤니티 지피터스 AI캠프 chatGPT를 포함하여 세상을 송두리채 바꾸고 있는 AI 툴을 배우고 실습하며 생산성을 극대화해anization/gpters', 'content': 'GPTers began as a minuscule chat GPT study and has developed into a group that boasts of being the operator of the AI community.'}]GPTers is a community that includes chatGPT and is known as the largest AI community in Korea. They offer AI tools for learning and practical applications to maximize productivity. You can learn more about GPTers on their [official website](https://portal.gpters.org/).
> Finished chain.
GPTers is a community that includes chatGPT and is known as the largest AI community in Korea. They offer AI tools for learning and practical applications to maximize productivity. You can learn more about GPTers on their [official website](https://portal.gpters.org/).
For this assignment I prepared two tools for the agent: one that only ever returns "32 degrees" and the Tavily search tool. As hoped, it uses the former for weather questions and the latter for everything else. I don't know exactly how it chooses between the two tools, though. Is it based on things like the docstring and the tool name?
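As far as I understand, that is roughly it: the @tool decorator exposes the function name and docstring as each tool's name and description, and those are what the agent sends to the model as the function-calling schema. A quick (illustrative) way to inspect what the model actually sees:

# Print the metadata the agent exposes to the model for tool selection.
for t in tools:
    print(t.name, "->", t.description)
# e.g. "search_weather -> Search about weather."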
I'm not sure whether I carried out the assignments exactly as intended, but I'm definitely more comfortable with this than when I started, which feels good.
The LangChain documentation site, WikiDocs, and the well-organized posts on GPTers were a big help while working on the assignments.
If you have better approaches, feedback, or reviews on any of the assignments, they are always welcome!
I'd appreciate any guidance. Thank you!
#11기 랭체인