
Development & AI | Alper Akgun

Microsoft AutoGen

October 2023

AutoGen is a framework for building LLM applications. It enables multiple agents to collaborate on tasks. AutoGen agents are customizable and conversable, and they readily accommodate human involvement; they can operate in different modes, combining LLMs, human input, and tools as needed.
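
For instance, the same user proxy can run fully autonomously or keep a human in the loop just by changing its human_input_mode. A rough sketch (the agent names, the work_dir, and the OAI_CONFIG_LIST file are placeholders):

from autogen import AssistantAgent, UserProxyAgent, config_list_from_json

config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})

# Fully autonomous: never asks the human and executes generated code locally.
auto_proxy = UserProxyAgent(
    "auto_proxy",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "sandbox"},
)

# Human in the loop: prompts the human for feedback whenever it receives a message.
human_proxy = UserProxyAgent(
    "human_proxy",
    human_input_mode="ALWAYS",
    code_execution_config={"work_dir": "sandbox"},
)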

AutoGen simplifies the development of LLM applications, making it easy to harness the power of multi-agent conversations. It streamlines the orchestration, automation, and optimization of complex LLM workflows, improving performance while mitigating the limitations of the underlying models.

AutoGen supports diverse conversation patterns within workflows. Developers can use it to build a wide range of patterns, varying the degree of conversation autonomy, the number of agents involved, and the conversation topology.
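
As a sketch of one such topology, a group chat lets several agents take turns under a manager (assuming the GroupChat and GroupChatManager classes shipped with pyautogen; the agent roles here are illustrative):

from autogen import (
    AssistantAgent,
    GroupChat,
    GroupChatManager,
    UserProxyAgent,
    config_list_from_json,
)

config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
llm_config = {"config_list": config_list}

coder = AssistantAgent("coder", llm_config=llm_config)
critic = AssistantAgent("critic", llm_config=llm_config)
user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "groupchat"})

# The manager picks the next speaker each round, up to max_round turns.
groupchat = GroupChat(agents=[user_proxy, coder, critic], messages=[], max_round=12)
manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="Write a short script and have it reviewed.")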

AutoGen provides a drop-in replacement for `openai.Completion` or `openai.ChatCompletion` as an enhanced inference API. It allows performance tuning, utilities such as API unification and caching, and advanced usage patterns such as error handling, multi-config inference, and context programming.
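
A minimal sketch of that inference API, assuming the pyautogen 0.1.x interface where autogen.ChatCompletion mirrors openai.ChatCompletion and accepts a config_list for fallback across endpoints:

import autogen

config_list = autogen.config_list_from_json(env_or_file="OAI_CONFIG_LIST")

# Configurations are tried in order, so a failing or rate-limited endpoint
# falls back to the next one; results can also be cached locally for reuse.
response = autogen.ChatCompletion.create(
    config_list=config_list,
    messages=[{"role": "user", "content": "Summarize AutoGen in one sentence."}],
)
print(response["choices"][0]["message"]["content"])

Because the call signature mirrors openai.ChatCompletion, existing OpenAI code can be pointed at this API with minimal changes.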

Install

pip install pyautogen

Two agents working together to chart NVIDIA vs. Tesla stock prices:


from autogen import AssistantAgent, UserProxyAgent, config_list_from_json

# Load model configurations from the OAI_CONFIG_LIST env var or file.
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")

# The assistant writes code; the user proxy executes it in ./coding.
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent("user_proxy", code_execution_config={"work_dir": "coding"})

# Kick off the conversation with the task description.
user_proxy.initiate_chat(assistant, message="Plot a chart of NVDA and TESLA stock price change YTD.")
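
Here config_list_from_json reads the model configurations from an OAI_CONFIG_LIST environment variable or file, which holds a JSON list of entries with keys such as model and api_key. The same list can also be supplied inline, reusing the AssistantAgent import from the script above (a sketch with placeholder values):

# Pass the configurations directly instead of calling config_list_from_json.
config_list = [
    {"model": "gpt-4", "api_key": "YOUR_OPENAI_API_KEY"},
    {"model": "gpt-3.5-turbo", "api_key": "YOUR_OPENAI_API_KEY"},
]
assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})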
            

Save the script under a name other than autogen.py (that would shadow the autogen package and break the import), for example stock_chart.py, then run it:


            python stock_chart.py