| source | repository | file | label | content |
|---|---|---|---|---|
GitHub | autogen | autogen/README.md | autogen | <a name="readme-top"></a> [](https://badge.fury.io/py/pyautogen) [](https://github.com/microsoft/autogen/actions/workflows/python-package.yml) . <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight: bold;"> ↑ Back to Top ↑ ... |
GitHub | autogen | autogen/README.md | autogen | Quickstart The easiest way to start playing is 1. Click below to use the GitHub Codespace [](https://codespaces.new/microsoft/autogen?quickstart=1) 2. Copy OAI_CONFIG_LIST_sample to ./notebook folder, name to OAI_CONFIG_LIST, and set the correc... |
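The OAI_CONFIG_LIST referenced above is a JSON list of model configurations. A minimal sketch of what such a file might contain, built and round-tripped in Python (the `model`/`api_key` field names follow the common AutoGen convention; check `OAI_CONFIG_LIST_sample` in the repo for the authoritative schema):

```python
import json

# Hypothetical OAI_CONFIG_LIST content: a JSON list of model configs.
config_list = [
    {"model": "gpt-4", "api_key": "sk-your-key-here"},
    {"model": "gpt-3.5-turbo", "api_key": "sk-your-key-here"},
]

# Serialize as it would appear in the OAI_CONFIG_LIST file...
serialized = json.dumps(config_list, indent=2)

# ...and verify it round-trips as valid JSON.
parsed = json.loads(serialized)
print(parsed[0]["model"])  # -> gpt-4
```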
GitHub | autogen | autogen/README.md | autogen | [Installation](https://microsoft.github.io/autogen/docs/Installation) ### Option 1. Install and Run AutoGen in Docker Find detailed instructions for users [here](https://microsoft.github.io/autogen/docs/installation/Docker#step-1-install-docker), and for developers [here](https://microsoft.github.io/autogen/docs/Contr... |
GitHub | autogen | autogen/README.md | autogen | Multi-Agent Conversation Framework AutoGen enables next-gen LLM applications with a generic [multi-agent conversation](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat) framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans. By automating chat among multiple ca... |
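The automated-chat pattern described above can be illustrated with a toy sketch in plain Python. This is deliberately not the actual AutoGen API (which provides `AssistantAgent`, `UserProxyAgent`, etc.); the class and function names here are invented for illustration only:

```python
class ToyAgent:
    """Minimal stand-in for a conversable agent: a name plus a reply rule."""
    def __init__(self, name, reply_fn):
        self.name = name
        self.reply_fn = reply_fn

    def reply(self, message):
        return self.reply_fn(message)


def run_chat(a, b, opening, max_turns=3):
    """Alternate messages between two agents, starting with a's opening."""
    transcript = [(a.name, opening)]
    speakers = [b, a]  # b answers first, then they alternate
    msg = opening
    for turn in range(max_turns - 1):
        agent = speakers[turn % 2]
        msg = agent.reply(msg)
        transcript.append((agent.name, msg))
    return transcript


assistant = ToyAgent("assistant", lambda m: f"acking: {m}")
user_proxy = ToyAgent("user_proxy", lambda m: "looks good")
log = run_chat(user_proxy, assistant, "Plot a chart of stock prices")
print(log[-1])  # -> ('user_proxy', 'looks good')
```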
GitHub | autogen | autogen/README.md | autogen | Enhanced LLM Inferences AutoGen also helps maximize the utility of expensive LLMs such as ChatGPT and GPT-4. It offers [enhanced LLM inference](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#api-unification) with powerful functionalities like caching, error handling, multi-config inferen... |
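The caching idea mentioned above can be sketched with a toy memoizing wrapper. AutoGen's real cache (disk- or Redis-backed) is more elaborate; this is only a minimal illustration of why caching saves repeated calls to an expensive model:

```python
import functools

# Counter for "real" (non-cached) invocations of the stand-in model call.
call_count = {"n": 0}

@functools.lru_cache(maxsize=None)
def cached_completion(prompt: str) -> str:
    call_count["n"] += 1             # only incremented on a cache miss
    return f"response to: {prompt}"  # stand-in for an actual LLM call

cached_completion("hello")
cached_completion("hello")           # identical prompt: served from cache
print(call_count["n"])  # -> 1
```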
GitHub | autogen | autogen/README.md | autogen | Documentation You can find detailed documentation about AutoGen [here](https://microsoft.github.io/autogen/). In addition, you can find: - [Research](https://microsoft.github.io/autogen/docs/Research), [blogposts](https://microsoft.github.io/autogen/blog) around AutoGen, and [Transparency FAQs](https://github.com/mi... |
GitHub | autogen | autogen/README.md | autogen | Related Papers [AutoGen](https://arxiv.org/abs/2308.08155) ``` @inproceedings{wu2023autogen, title={AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework}, author={Qingyun Wu and Gagan Bansal and Jieyu Zhang and Yiran Wu and Beibin Li and Erkang Zhu and Li Jiang and Xiaoyun Zh... |
GitHub | autogen | autogen/README.md | autogen | Contributing This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit <https://cla.opensource.microsoft.com>. If you are ... |
GitHub | autogen | autogen/README.md | autogen | Contributors Wall <a href="https://github.com/microsoft/autogen/graphs/contributors"> <img src="https://contrib.rocks/image?repo=microsoft/autogen&max=204" /> </a> <p align="right" style="font-size: 14px; color: #555; margin-top: 20px;"> <a href="#readme-top" style="text-decoration: none; color: blue; font-weight:... |
GitHub | autogen | autogen/SECURITY.md | autogen | <!-- BEGIN MICROSOFT SECURITY.MD V0.0.8 BLOCK --> |
GitHub | autogen | autogen/SECURITY.md | autogen | Security Microsoft takes the security of our software products and services seriously. This includes all source code repositories managed through our GitHub organizations, including [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://... |
GitHub | autogen | autogen/SECURITY.md | autogen | Reporting Security Issues **Please do not report security vulnerabilities through public GitHub issues.** Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report). If you prefer to submit without loggi... |
GitHub | autogen | autogen/SECURITY.md | autogen | Preferred Languages We prefer all communications to be in English. |
GitHub | autogen | autogen/SECURITY.md | autogen | Policy Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd). <!-- END MICROSOFT SECURITY.MD BLOCK --> |
GitHub | autogen | autogen/TRANSPARENCY_FAQS.md | autogen | # AutoGen: Responsible AI FAQs |
GitHub | autogen | autogen/TRANSPARENCY_FAQS.md | autogen | What is AutoGen? AutoGen is a framework for simplifying the orchestration, optimization, and automation of LLM workflows. It offers customizable and conversable agents that leverage the strongest capabilities of the most advanced LLMs, like GPT-4, while addressing their limitations by integrating with humans and tools ... |
GitHub | autogen | autogen/TRANSPARENCY_FAQS.md | autogen | What can AutoGen do? AutoGen is an experimental framework for building complex multi-agent conversation systems by: - Defining a set of agents with specialized capabilities and roles. - Defining the interaction behavior between agents, i.e., what to reply when an agent receives messages from another agent. The a... |
GitHub | autogen | autogen/TRANSPARENCY_FAQS.md | autogen | What is/are AutoGen’s intended use(s)? Please note that AutoGen is an open-source library under active development and intended for research purposes. It should not be used in any downstream applications without additional detailed evaluation of robustness, safety issues and assessment of any potential harm o... |
GitHub | autogen | autogen/TRANSPARENCY_FAQS.md | autogen | How was AutoGen evaluated? What metrics are used to measure performance? - Current version of AutoGen was evaluated on six applications to illustrate its potential in simplifying the development of high-performance multi-agent applications. These applications are selected based on their real-world relevance, problem d... |
GitHub | autogen | autogen/TRANSPARENCY_FAQS.md | autogen | What are the limitations of AutoGen? How can users minimize the impact of AutoGen’s limitations when using the system? AutoGen relies on existing LLMs. Experimenting with AutoGen retains the common limitations of large language models, including: - Data Biases: Large language models, trained on extensive data, can in... |
GitHub | autogen | autogen/TRANSPARENCY_FAQS.md | autogen | What operational factors and settings allow for effective and responsible use of AutoGen? - Code execution: AutoGen recommends using Docker containers so that code execution can happen in a safer manner. Users can use function calls instead of free-form code to execute only pre-defined functions. That helps increase the... |
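The "pre-defined functions only" recommendation above can be sketched as a simple allowlist dispatcher. This is an illustration of the safety idea, not AutoGen's actual function-call mechanism (the registry and function names here are invented):

```python
# Only functions explicitly registered here may be invoked by an agent.
REGISTRY = {}

def register(fn):
    """Add a function to the allowlist of callable tools."""
    REGISTRY[fn.__name__] = fn
    return fn

@register
def add(a, b):
    return a + b

def dispatch(name, *args):
    """Invoke a registered function; refuse anything not on the allowlist."""
    if name not in REGISTRY:
        raise PermissionError(f"function {name!r} is not registered")
    return REGISTRY[name](*args)

print(dispatch("add", 2, 3))  # -> 5
```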
GitHub | autogen | autogen/CODE_OF_CONDUCT.md | autogen | # Microsoft Open Source Code of Conduct This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). Resources: - [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/) - [Microsoft Code of Conduct FAQ](https://opensource.mic... |
GitHub | autogen | autogen/samples/apps/auto-anny/README.md | autogen | <div align="center"> <img src="images/icon.png" alt="Repo Icon" width="100" height="100"> </div> # AutoAnny AutoAnny is a Discord bot built using AutoGen to help with AutoGen's Discord server. In fact, Anny can help with any OSS GitHub project (set `ANNY_GH_REPO` below). |
GitHub | autogen | autogen/samples/apps/auto-anny/README.md | autogen | Features - **`/heyanny help`**: Lists commands. - **`/heyanny ghstatus`**: Summarizes GitHub activity. - **`/heyanny ghgrowth`**: Shows GitHub repo growth indicators. - **`/heyanny ghunattended`**: Lists unattended issues and PRs. |
GitHub | autogen | autogen/samples/apps/auto-anny/README.md | autogen | Installation 1. Clone the AutoGen repository and `cd samples/apps/auto-anny` 2. Install dependencies: `pip install -r requirements.txt` 3. Export Discord token and GitHub API token, ``` export OAI_CONFIG_LIST=your-autogen-config-list export DISCORD_TOKEN=your-bot-token export GH_TOKEN=your-gh-token ... |
GitHub | autogen | autogen/samples/apps/auto-anny/README.md | autogen | Roadmap - Enable access control - Enable a richer set of commands - Enrich agents with tool use |
GitHub | autogen | autogen/samples/apps/auto-anny/README.md | autogen | Contributing Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change. |
GitHub | autogen | autogen/samples/apps/cap/TODO.md | autogen | - ~~Pretty print debug_logs~~ - ~~colors~~ - ~~messages to oai should be condensed~~ - ~~remove orchestrator in scenario 4 and have the two actors talk to each other~~ - ~~pass a complex multi-part message~~ - ~~protobuf for messages~~ - ~~make changes to autogen to enable scenario 3 to work with CAN~~ - ~~make gro... |
GitHub | autogen | autogen/samples/apps/cap/README.md | autogen | # Composable Actor Platform (CAP) for AutoGen |
GitHub | autogen | autogen/samples/apps/cap/README.md | autogen | I just want to run the remote AutoGen agents! *Python Instructions (Windows, Linux, MacOS):* 0) cd py 1) pip install -r autogencap/requirements.txt 2) python ./demo/App.py 3) Choose (5) and follow instructions to run standalone Agents 4) Choose other options for other demos *Demo Notes:* 1) Options involving AutoGen ... |
GitHub | autogen | autogen/samples/apps/cap/README.md | autogen | What is Composable Actor Platform (CAP)? AutoGen is about Agents and Agent Orchestration. CAP extends AutoGen to allow Agents to communicate via a message bus. CAP, therefore, deals with the space between these components. CAP is a message-based actor platform that allows actors to be composed into arbitrary graph... |
GitHub | autogen | autogen/samples/apps/cap/py/README.md | autogen | # Composable Actor Platform (CAP) for AutoGen |
GitHub | autogen | autogen/samples/apps/cap/py/README.md | autogen | I just want to run the remote AutoGen agents! *Python Instructions (Windows, Linux, MacOS):* `pip install autogencap` 1) AutoGen requires OAI_CONFIG_LIST. AutoGen Python requirements: 3.8 <= python <= 3.11 |
GitHub | autogen | autogen/samples/apps/cap/py/README.md | autogen | What is Composable Actor Platform (CAP)? AutoGen is about Agents and Agent Orchestration. CAP extends AutoGen to allow Agents to communicate via a message bus. CAP, therefore, deals with the space between these components. CAP is a message-based actor platform that allows actors to be composed into arbitrary graph... |
GitHub | autogen | autogen/samples/apps/cap/c++/Readme.md | autogen | Coming soon... |
GitHub | autogen | autogen/samples/apps/cap/node/Readme.md | autogen | Coming soon... |
GitHub | autogen | autogen/samples/apps/cap/c#/Readme.md | autogen | Coming soon... |
GitHub | autogen | autogen/samples/apps/autogen-studio/README.md | autogen | # AutoGen Studio [](https://badge.fury.io/py/autogenstudio) [](https://pepy.tech/project/autogenstudio)  AutoGen Studio is an AutoGen-powered AI app (user interf... |
GitHub | autogen | autogen/samples/apps/autogen-studio/README.md | autogen | Contribution Guide We welcome contributions to AutoGen Studio. We recommend the following general steps to contribute to the project: - Review the overall AutoGen project [contribution guide](https://github.com/microsoft/autogen?tab=readme-ov-file#contributing) - Please review the AutoGen Studio [roadmap](https://git... |
GitHub | autogen | autogen/samples/apps/autogen-studio/README.md | autogen | FAQ Please refer to the AutoGen Studio [FAQs](https://microsoft.github.io/autogen/docs/autogen-studio/faqs) page for more information. |
GitHub | autogen | autogen/samples/apps/autogen-studio/README.md | autogen | Acknowledgements AutoGen Studio is based on the [AutoGen](https://microsoft.github.io/autogen) project. It was adapted from a research prototype built in October 2023 (original credits: Gagan Bansal, Adam Fourney, Victor Dibia, Piali Choudhury, Saleema Amershi, Ahmed Awadallah, Chi Wang). |
GitHub | autogen | autogen/samples/apps/autogen-studio/frontend/README.md | autogen | ## 🚀 Running UI in Dev Mode Run the UI in dev mode (make changes and see them reflected in the browser with hot reloading): - npm install - npm run start This should start the server on port 8000. |
GitHub | autogen | autogen/samples/apps/autogen-studio/frontend/README.md | autogen | Design Elements - **Gatsby**: The app is created in Gatsby. A guide on bootstrapping a Gatsby app can be found here - https://www.gatsbyjs.com/docs/quick-start/. This provides an overview of the project file structure, including the functionality of files like `gatsby-config.js`, `gatsby-node.js`, `gatsby-browser.js` and `... |
GitHub | autogen | autogen/samples/apps/autogen-studio/frontend/README.md | autogen | Modifying the UI, Adding Pages The core of the app can be found in the `src` folder. To add pages, add a new folder in `src/pages` and add an `index.js` file; this will be the entry point for the page. For example, to add a route in the app like `/about`, add a folder `about` in `src/pages` and add an `index.tsx` file. Y... |
GitHub | autogen | autogen/samples/apps/autogen-studio/frontend/README.md | autogen | Connecting to the frontend The frontend makes requests to the backend API and expects it at `/api` on localhost port 8081 |
GitHub | autogen | autogen/samples/apps/autogen-studio/frontend/README.md | autogen | setting env variables for the UI - please look at `.env.default` - make a copy of this file and name it `.env.development` - set the values for the variables in this file - The main variable here is `GATSBY_API_URL` which should be set to `http://localhost:8081/api` for local development. This tells the UI where to ... |
GitHub | autogen | autogen/samples/apps/promptflow-autogen/README.md | autogen | # What is Promptflow Promptflow is a comprehensive suite of tools that simplifies the development, testing, evaluation, and deployment of LLM based AI applications. It also supports integration with Azure AI for cloud-based operations and is designed to streamline end-to-end development. Refer to [Promptflow docs](ht... |
GitHub | autogen | autogen/samples/apps/promptflow-autogen/README.md | autogen | Getting Started - Install required python packages ```bash cd samples/apps/promptflow-autogen pip install -r requirements.txt ``` - This example assumes a working Redis cache service is available. You can get started locally using this [guide](https://redis.io/docs/latest/operate/oss_and_stack/install/ins... |
GitHub | autogen | autogen/samples/apps/promptflow-autogen/README.md | autogen | Chat flow Chat flow is designed for conversational application development, building upon the capabilities of standard flow and providing enhanced support for chat inputs/outputs and chat history management. With chat flow, you can easily create a chatbot that handles chat input and output. |
GitHub | autogen | autogen/samples/apps/promptflow-autogen/README.md | autogen | Create connection for LLM tool to use You can follow these steps to create a connection required by an LLM tool. Currently, there are two connection types supported by the LLM tool: "AzureOpenAI" and "OpenAI". If you want to use the "AzureOpenAI" connection type, you need to create an Azure OpenAI service first. Please refer ... |
GitHub | autogen | autogen/samples/apps/promptflow-autogen/README.md | autogen | Develop a chat flow The most important elements that differentiate a chat flow from a standard flow are **Chat Input**, **Chat History**, and **Chat Output**. - **Chat Input**: Chat input refers to the messages or queries submitted by users to the chatbot. Effectively handling chat input is crucial for a successful c... |
GitHub | autogen | autogen/samples/apps/promptflow-autogen/README.md | autogen | Interact with chat flow Promptflow supports interaction via VS Code or via the Promptflow CLI, which provides a way to start an interactive chat session for a chat flow. Use the command below to start an interactive chat session: ```bash pf flow test --flow <flow_folder> --interactive ``` |
GitHub | autogen | autogen/samples/apps/promptflow-autogen/README.md | autogen | Autogen State Flow [Autogen State Flow](./autogen_stateflow.py) contains stateflow example shared at [StateFlow](https://microsoft.github.io/autogen/blog/2024/02/29/StateFlow/) with Promptflow. All the interim messages are sent to Redis channel. You can use these to stream to frontend or take further actions. Output o... |
GitHub | autogen | autogen/samples/apps/promptflow-autogen/README.md | autogen | Agent Nested Chat [Autogen Nested Chat](./agentchat_nestedchat.py) contains Scenario 1 of nested chat example shared at [Nested Chats](https://microsoft.github.io/autogen/docs/notebooks/agentchat_nestedchat) with Promptflow. All the interim messages are sent to Redis channel. You can use these to stream to frontend or... |
GitHub | autogen | autogen/samples/apps/promptflow-autogen/README.md | autogen | Redis for Data cache and Interim Messages AutoGen supports Redis for [data caching](https://microsoft.github.io/autogen/docs/reference/cache/redis_cache/), and since Redis also supports a pub/sub model, this Promptflow example is configured for all agent callbacks to send messages to a Redis channel. This is option... |
GitHub | autogen | autogen/samples/apps/websockets/README.md | autogen | # Using websockets with FastAPI and AutoGen |
GitHub | autogen | autogen/samples/apps/websockets/README.md | autogen | Running the example 1. Navigate to the directory containing the example: ``` cd samples/apps/websockets ``` 2. Install the necessary dependencies: ``` ./setup.py ``` 3. Run the application: ``` uvicorn application:app --reload ``` You should now be able to access the application ... |
GitHub | autogen | autogen/samples/tools/finetuning/README.md | autogen | # Tools for fine-tuning the local models that power agents This directory aims to contain tools for fine-tuning the local models that power agents. |
GitHub | autogen | autogen/samples/tools/finetuning/README.md | autogen | Fine tune a custom model client AutoGen supports the use of custom models to power agents [see blog post here](https://microsoft.github.io/autogen/blog/2024/01/26/Custom-Models). This directory contains a tool to provide feedback to that model, which can be used to fine-tune it. The creator of the Custom Model ... |
GitHub | autogen | autogen/samples/tools/webarena/README.md | autogen | # WebArena Benchmark This directory helps run AutoGen agents on the [WebArena](https://arxiv.org/pdf/2307.13854.pdf) benchmark. |
GitHub | autogen | autogen/samples/tools/webarena/README.md | autogen | Installing WebArena WebArena can be installed by following the instructions in [WebArena's GitHub repository](git@github.com:web-arena-x/webarena.git). When using WebArena with AutoGen there is a clash between the OpenAI package versions, and some code changes are needed in WebArena to be compatible with AutoGen's OpenAI version... |
GitHub | autogen | autogen/samples/tools/webarena/README.md | autogen | Running with AutoGen agents You can use the `run.py` file in the `webarena` directory to run WebArena with AutoGen. The OpenAI (or AzureOpenAI or other model) configuration can be set up via `OAI_CONFIG_LIST`. The config list will be filtered by whatever model is passed in the `--model` argument. e.g. of running `run.... |
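The model-based filtering described above can be sketched as a one-line list comprehension (AutoGen exposes a `filter_config` helper for this; the simplified function below is illustrative, not the library's actual implementation):

```python
# Keep only the configs whose "model" field matches the --model argument.
def filter_by_model(config_list, model):
    return [c for c in config_list if c.get("model") == model]

configs = [
    {"model": "gpt-4", "api_key": "sk-..."},
    {"model": "gpt-3.5-turbo", "api_key": "sk-..."},
]
selected = filter_by_model(configs, "gpt-4")
print(len(selected), selected[0]["model"])  # -> 1 gpt-4
```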
GitHub | autogen | autogen/samples/tools/webarena/README.md | autogen | References **WebArena: A Realistic Web Environment for Building Autonomous Agents**<br/> Zhou, Shuyan and Xu, Frank F and Zhu, Hao and Zhou, Xuhui and Lo, Robert and Sridhar, Abishek and Cheng, Xianyi and Bisk, Yonatan and Fried, Daniel and Alon, Uri and others<br/> [https://arxiv.org/pdf/2307.13854.pdf](https://arxiv.... |
GitHub | autogen | autogen/samples/tools/autogenbench/CONTRIBUTING.md | autogen | # Contributing to AutoGenBench As part of the broader AutoGen project, AutoGenBench welcomes community contributions. Contributions are subject to AutoGen's [contribution guidelines](https://microsoft.github.io/autogen/docs/Contribute), as well as a few additional AutoGenBench-specific requirements outlined here. You ... |
GitHub | autogen | autogen/samples/tools/autogenbench/CONTRIBUTING.md | autogen | General Contribution Requirements We ask that all contributions to AutoGenBench adhere to the following: - Follow AutoGen's broader [contribution guidelines](https://microsoft.github.io/autogen/docs/Contribute) - All AutoGenBench benchmarks should live in a subfolder of `/samples/tools/autogenbench/scenarios` alongsid... |
GitHub | autogen | autogen/samples/tools/autogenbench/CONTRIBUTING.md | autogen | Implementing and Running Benchmark Tasks At the core of any benchmark is a set of tasks. To implement tasks that are runnable by AutoGenBench, you must adhere to AutoGenBench's templating and scenario expansion algorithms, as outlined below. ### Task Definitions All tasks are stored in JSONL files (in subdirectories ... |
GitHub | autogen | autogen/samples/tools/autogenbench/CONTRIBUTING.md | autogen | Task Instance Expansion Algorithm Once the tasks have been defined, as per above, they must be "instantiated" before they can be run. This instantiation happens automatically when the user issues the `autogenbench run` command and involves creating a local folder to share with Docker. Each instance and repetition gets... |
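The instantiation step above can be illustrated with a toy substitution pass: the real AutoGenBench copies a template folder and applies each task's substitutions to its files, while this sketch applies them to a single string (placeholder names invented for the example):

```python
# Toy version of template instantiation: apply a task's substitutions
# to a template string (AutoGenBench does this over whole folders).
def instantiate(template_text, substitutions):
    for placeholder, value in substitutions.items():
        template_text = template_text.replace(placeholder, value)
    return template_text

template = "Solve: __PROMPT__"
task = {"substitutions": {"__PROMPT__": "what is 2+2?"}}
print(instantiate(template, task["substitutions"]))  # -> Solve: what is 2+2?
```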
GitHub | autogen | autogen/samples/tools/autogenbench/CONTRIBUTING.md | autogen | Scenario Execution Algorithm Once the task has been instantiated it is run (via run.sh). This script will execute the following steps: 1. If a file named `global_init.sh` is present, run it. 2. If a file named `scenario_init.sh` is present, run it. 3. Install the requirements.txt file (if running in Docker) 4. Run th... |
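The numbered steps above can be sketched as a small planner that decides which hooks run, in order, given which optional files are present (the logic is illustrative; `run.sh` is the authoritative source):

```python
# Build the execution order described in the steps above: optional init
# hooks first, then dependency install (in Docker), then the scenario.
def execution_plan(present_files, in_docker=True):
    plan = []
    for hook in ("global_init.sh", "scenario_init.sh"):
        if hook in present_files:       # hooks run only if the file exists
            plan.append(hook)
    if in_docker:
        plan.append("pip install -r requirements.txt")
    plan.append("run scenario")
    return plan

print(execution_plan({"scenario_init.sh"}))
```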
GitHub | autogen | autogen/samples/tools/autogenbench/CONTRIBUTING.md | autogen | Integrating with the `tabulate` and `clone` commands. The above details are sufficient for defining and running tasks, but if you wish to support the `autogenbench tabulate` and `autogenbench clone` commands, a few additional steps are required. ### Tabulations If you wish to leverage the default tabulation logic, i... |
GitHub | autogen | autogen/samples/tools/autogenbench/README.md | autogen | # AutoGenBench AutoGenBench is a tool for repeatedly running a set of pre-defined AutoGen tasks in a setting with tightly-controlled initial conditions. With each run, AutoGenBench will start from a blank slate. The agents being evaluated will need to work out what code needs to be written, and what libraries or depen... |
GitHub | autogen | autogen/samples/tools/autogenbench/README.md | autogen | Technical Specifications If you are already an AutoGenBench pro, and want the full technical specifications, please review the [contributor's guide](CONTRIBUTING.md). |
GitHub | autogen | autogen/samples/tools/autogenbench/README.md | autogen | Docker Requirement AutoGenBench also requires Docker (Desktop or Engine). **It will not run in GitHub codespaces**, unless you opt for native execution (which is strongly discouraged). To install Docker Desktop see [https://www.docker.com/products/docker-desktop/](https://www.docker.com/products/docker-desktop/). |
GitHub | autogen | autogen/samples/tools/autogenbench/README.md | autogen | Installation and Setup **To get the most out of AutoGenBench, the `autogenbench` package should be installed**. At present, the easiest way to do this is to install it via `pip`: ``` pip install autogenbench ``` If you would prefer working from source code (e.g., for development, or to utilize an alternate branch), ... |
GitHub | autogen | autogen/samples/tools/autogenbench/README.md | autogen | A Typical Session Once AutoGenBench is installed and the necessary keys are configured, a typical session will look as follows: ``` autogenbench clone HumanEval cd HumanEval autogenbench run Tasks/r_human_eval_two_agents.jsonl autogenbench tabulate results/r_human_eval_two_agents ``` Where: - `autogenbench clone HumanEval` downloads a... |
GitHub | autogen | autogen/samples/tools/autogenbench/README.md | autogen | Cloning Benchmarks To clone an existing benchmark, simply run: ``` autogenbench clone [BENCHMARK] ``` For example, ``` autogenbench clone HumanEval ``` To see which existing benchmarks are available to clone, run: ``` autogenbench clone --list ``` |
GitHub | autogen | autogen/samples/tools/autogenbench/README.md | autogen | Running AutoGenBench To run a benchmark (which executes the tasks, but does not compute metrics), simply execute: ``` cd [BENCHMARK] autogenbench run Tasks ``` For example, ``` cd HumanEval autogenbench run Tasks ``` The default is to run each task once. To run each scenario 10 times, use: ``` autogenbench run --re... |
GitHub | autogen | autogen/samples/tools/autogenbench/README.md | autogen | Results By default, AutoGenBench stores results in a folder hierarchy with the following template: ``./results/[scenario]/[task_id]/[instance_id]`` For example, consider the following folders: ``./results/default_two_agents/two_agent_stocks/0`` ``./results/default_two_agents/two_agent_stocks/1`` ... ``./resul... |
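The path template above can be built programmatically; a minimal helper (name invented for illustration):

```python
# Build a results directory path following the documented template:
#   ./results/[scenario]/[task_id]/[instance_id]
def result_dir(scenario, task_id, instance_id):
    return "/".join(["./results", scenario, str(task_id), str(instance_id)])

print(result_dir("default_two_agents", "two_agent_stocks", 0))
# -> ./results/default_two_agents/two_agent_stocks/0
```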
GitHub | autogen | autogen/samples/tools/autogenbench/README.md | autogen | Contributing or Defining New Tasks or Benchmarks If you would like to develop -- or even contribute -- your own tasks or benchmarks, please review the [contributor's guide](CONTRIBUTING.md) for complete technical details. |
GitHub | autogen | autogen/samples/tools/autogenbench/scenarios/MATH/README.md | autogen | # MATH Benchmark This scenario implements the [MATH](https://arxiv.org/abs/2103.03874) benchmark. |
GitHub | autogen | autogen/samples/tools/autogenbench/scenarios/MATH/README.md | autogen | Running the tasks ``` autogenbench run Tasks/math_two_agents.jsonl autogenbench tabulate Results/math_two_agents ``` By default, only a small subset (17 of 5000) of the MATH problems is exposed. Edit `Scripts/init_tasks.py` to expose more tasks. |
GitHub | autogen | autogen/samples/tools/autogenbench/scenarios/MATH/README.md | autogen | Note on automated evaluation In this scenario, we adopted an automated evaluation pipeline (from the [AutoGen](https://arxiv.org/abs/2308.08155) evaluation) that uses an LLM to compare the results. Thus, the metric above is only an estimation of the agent's performance on math problems. We also find a similar practice of usin... |
GitHub | autogen | autogen/samples/tools/autogenbench/scenarios/MATH/README.md | autogen | References **Measuring Mathematical Problem Solving With the MATH Dataset**<br/> Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, Jacob Steinhardt<br/> [https://arxiv.org/abs/2103.03874](https://arxiv.org/abs/2103.03874) **AutoGen: Enabling Next-Gen LLM Applications via Mu... |
GitHub | autogen | autogen/samples/tools/autogenbench/scenarios/Examples/README.md | autogen | # Example Tasks Various AutoGen example tasks. Unlike other benchmark tasks, these tasks have no automated evaluation. |
GitHub | autogen | autogen/samples/tools/autogenbench/scenarios/Examples/README.md | autogen | Running the tasks ``` autogenbench run Tasks/default_two_agents ``` Some tasks require a Bing API key. Edit the ENV.json file to provide a valid BING_API_KEY, or simply allow that task to fail (it is only required by one task). |
GitHub | autogen | autogen/samples/tools/autogenbench/scenarios/AutoGPT/README.md | autogen | # AutoGPT Benchmark This scenario implements an older subset of the [AutoGPT](https://github.com/Significant-Gravitas/Auto-GPT-Benchmarks/tree/master/agbenchmark#readme) benchmark. Tasks were selected in November 2023, and may have since been deprecated. They are nonetheless useful for comparison and development. |
GitHub | autogen | autogen/samples/tools/autogenbench/scenarios/AutoGPT/README.md | autogen | Running the tasks ``` autogenbench run Tasks/autogpt__two_agents.jsonl autogenbench tabulate Results/autogpt__two_agents ``` |
GitHub | autogen | autogen/samples/tools/autogenbench/scenarios/GAIA/README.md | autogen | # GAIA Benchmark This scenario implements the [GAIA](https://arxiv.org/abs/2311.12983) agent benchmark. |
GitHub | autogen | autogen/samples/tools/autogenbench/scenarios/GAIA/README.md | autogen | Running the TwoAgents tasks Level 1 tasks: ```sh autogenbench run Tasks/gaia_test_level_1__two_agents.jsonl autogenbench tabulate Results/gaia_test_level_1__two_agents ``` Level 2 and 3 tasks are executed similarly. |
GitHub | autogen | autogen/samples/tools/autogenbench/scenarios/GAIA/README.md | autogen | Running the SocietyOfMind tasks Running the SocietyOfMind tasks is similar to the TwoAgentTasks, but requires an `ENV.json` file with a working BING API key. This file should be located in the root current working directory from where you are running autogenbench, and should have at least the following contents: ```j... |
GitHub | autogen | autogen/samples/tools/autogenbench/scenarios/GAIA/README.md | autogen | References **GAIA: a benchmark for General AI Assistants**<br/> Grégoire Mialon, Clémentine Fourrier, Craig Swift, Thomas Wolf, Yann LeCun, Thomas Scialom<br/> [https://arxiv.org/abs/2311.12983](https://arxiv.org/abs/2311.12983) |
GitHub | autogen | autogen/samples/tools/autogenbench/scenarios/HumanEval/README.md | autogen | # HumanEval Benchmark This scenario implements a modified version of the [HumanEval](https://arxiv.org/abs/2107.03374) benchmark. Compared to the original benchmark, there are **two key differences** here: - A chat model rather than a completion model is used. - The agents get pass/fail feedback about their implement... |
GitHub | autogen | autogen/samples/tools/autogenbench/scenarios/HumanEval/README.md | autogen | Running the tasks ``` autogenbench run Tasks/human_eval_two_agents.jsonl autogenbench tabulate Results/human_eval_two_agents ``` For faster development and iteration, a reduced HumanEval set is available via `Tasks/r_human_eval_two_agents.jsonl`, which contains only 26 problems of varying difficulty. |
GitHub | autogen | autogen/samples/tools/autogenbench/scenarios/HumanEval/README.md | autogen | References **Evaluating Large Language Models Trained on Code**<br/> Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mis... |
GitHub | autogen | autogen/autogen/agentchat/contrib/agent_eval/README.md | autogen | Agents for running the [AgentEval](https://microsoft.github.io/autogen/blog/2023/11/20/AgentEval/) pipeline. AgentEval is a process for evaluating an LLM-based system's performance on a given task. When given a task to evaluate and a few example runs, the critic and subcritic agents create evaluation criteria for eval... |
GitHub | autogen | autogen/.github/PULL_REQUEST_TEMPLATE.md | autogen | <!-- Thank you for your contribution! Please review https://microsoft.github.io/autogen/docs/Contribute before opening a pull request. --> <!-- Please add a reviewer to the assignee section when you create a PR. If you don't have the access to it, we will shortly find a reviewer and assign them to your PR. --> |
GitHub | autogen | autogen/.github/PULL_REQUEST_TEMPLATE.md | autogen | Why are these changes needed? <!-- Please give a short summary of the change and the problem this solves. --> |
GitHub | autogen | autogen/.github/PULL_REQUEST_TEMPLATE.md | autogen | Related issue number <!-- For example: "Closes #1234" --> |
GitHub | autogen | autogen/.github/PULL_REQUEST_TEMPLATE.md | autogen | Checks - [ ] I've included any doc changes needed for https://microsoft.github.io/autogen/. See https://microsoft.github.io/autogen/docs/Contribute#documentation to build and test documentation locally. - [ ] I've added tests (if relevant) corresponding to the changes introduced in this PR. - [ ] I've made sure all au... |