
On Windows, _make_subprocess_transport raise NotImplementedError when running MagenticOneGroupChat #5069

Open
zxh9813 opened this issue Jan 16, 2025 · 11 comments

@zxh9813 commented Jan 16, 2025

What happened?

Can anyone help with the issue below? I just copied and pasted the example but got this error:

```
File "c:\Users\c8b4bd\AppData\Local\miniforge3\envs\autogen\lib\asyncio\base_events.py", line 498, in _make_subprocess_transport
    raise NotImplementedError
NotImplementedError
```

What did you expect to happen?

There should be no error, right?

How can we reproduce it (as minimally and precisely as possible)?

Run the code below in a Jupyter notebook:

```python
import asyncio
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.teams import MagenticOneGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.agents.web_surfer import MultimodalWebSurfer

model_client = OpenAIChatCompletionClient(
    model="llama3.2",
    base_url="http://localhost:11434/v1",
    api_key="ollama",
    model_capabilities={
        "vision": True,
        "function_calling": True,
        "json_output": True,
    },
)


async def main() -> None:
    # model_client = OpenAIChatCompletionClient()

    surfer = MultimodalWebSurfer(
        "WebSurfer",
        model_client=model_client,
    )
    team = MagenticOneGroupChat([surfer], model_client=model_client)
    await Console(team.run_stream(task="What is the UV index in Melbourne today?"))


await main()
```
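For context: `_make_subprocess_transport` raises `NotImplementedError` when the running event loop does not support subprocess transports. On Windows this is the behaviour of the selector-based loop, which Jupyter environments commonly install; the `ProactorEventLoop` does support subprocesses (needed here because `MultimodalWebSurfer` launches a browser). A minimal, platform-guarded sketch of the usual workaround, applied before any loop is created:

```python
import asyncio
import sys

# On Windows, only the ProactorEventLoop supports subprocess creation;
# the selector-based loop raises NotImplementedError in
# _make_subprocess_transport. Set the policy before creating any loop.
if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())


async def main() -> None:
    # Stand-in for the MagenticOneGroupChat code above: report which
    # loop implementation is active.
    print(type(asyncio.get_running_loop()).__name__)


asyncio.run(main())
```

Note that inside an already-running notebook kernel the loop exists before your cell executes, so the policy change may not take effect; running the same code as a plain `python` script with `asyncio.run(...)` is the more reliable check.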

AutoGen version

0.4

Which package was this bug in

Core

Model used

llama3.2

Python version

3.10.16

Operating system

windows

Any additional info you think would be helpful for fixing this bug

No response

@ekzhu (Collaborator) commented Jan 16, 2025

@zxh9813

It looks like a Windows platform issue. Could you please post your full output stack trace? Please use triple backticks (```) to format your output and code.

ekzhu changed the title from "NotImplementedError" to "On Windows, _make_subprocess_transport raise NotImplementedError when running MagenticOneGroupChat" on Jan 16, 2025
@zxh9813 (Author) commented Jan 16, 2025

```
---------- user ----------
What is the UV index in Melbourne today?
---------- MagenticOneOrchestrator ----------

We are working to address the following user request:

What is the UV index in Melbourne today?


To answer this request we have assembled the following team:

WebSurfer: 
    A helpful assistant with access to a web browser.
    Ask them to perform web searches, open pages, and interact with content (e.g., clicking links, scrolling the viewport, etc., filling in form fields, etc.).
    It can also summarize the entire page, or answer questions based on the content of the page.
    It can also be asked to sleep and wait for pages to load, in cases where the pages seem to be taking a while to load.


Here is an initial fact sheet to consider:

1. GIVEN OR VERIFIED FACTS: None of specific facts are given about Melbourne, except for the fact that it's a city located in Victoria, Australia.

2. FACTS TO LOOK UP: 
    * Current UV index for Melbourne (likely required)
    * Melbourne's geographical location or time zone
...
  File "c:\Users\c8b4bd\AppData\Local\miniforge3\envs\autogen\lib\asyncio\base_events.py", line 498, in _make_subprocess_transport
    raise NotImplementedError
NotImplementedError
```

```
Error processing publish message for group_chat_manager/a7a0d7a9-6be8-4855-9510-9a6984b13ed6
Traceback (most recent call last):
  File "c:\Users\c8b4bd\AppData\Local\miniforge3\envs\autogen\lib\site-packages\autogen_core\_single_threaded_agent_runtime.py", line 409, in _on_message
    return await agent.on_message(
  File "c:\Users\c8b4bd\AppData\Local\miniforge3\envs\autogen\lib\site-packages\autogen_core\_base_agent.py", line 113, in on_message
    return await self.on_message_impl(message, ctx)
  File "c:\Users\c8b4bd\AppData\Local\miniforge3\envs\autogen\lib\site-packages\autogen_agentchat\teams\_group_chat\_sequential_routed_agent.py", line 48, in on_message_impl
    return await super().on_message_impl(message, ctx)
  File "c:\Users\c8b4bd\AppData\Local\miniforge3\envs\autogen\lib\site-packages\autogen_core\_routed_agent.py", line 485, in on_message_impl
    return await h(self, message, ctx)
  File "c:\Users\c8b4bd\AppData\Local\miniforge3\envs\autogen\lib\site-packages\autogen_core\_routed_agent.py", line 268, in wrapper
    return_value = await func(self, message, ctx)  # type: ignore
  File "c:\Users\c8b4bd\AppData\Local\miniforge3\envs\autogen\lib\site-packages\autogen_agentchat\teams\_group_chat\_magentic_one\_magentic_one_orchestrator.py", line 187, in handle_agent_response
    await self._orchestrate_step(ctx.cancellation_token)
  File "c:\Users\c8b4bd\AppData\Local\miniforge3\envs\autogen\lib\site-packages\autogen_agentchat\teams\_group_chat\_magentic_one\_magentic_one_orchestrator.py", line 368, in _orchestrate_step
    raise ValueError(
ValueError: Invalid next speaker: Human Analyst from the ledger, participants are: ['WebSurfer']
---------- MagenticOneOrchestrator ----------
Please analyze the given information and suggest an alternative approach using current and accurate data.
```

@ekzhu (Collaborator) commented Jan 16, 2025

Looks like there are two errors:

  1. Invalid next speaker: this seems like a bug. You only have one agent in the group chat, right? Cc @afourney.
  2. NotImplementedError: it's unclear what this is.

@zxh9813 (Author) commented Jan 16, 2025

I just copied the code from https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/magentic-one.html and changed model_client to connect to Ollama with llama3.2.

Below is the code from that web page:

```python
import asyncio
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.teams import MagenticOneGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.agents.web_surfer import MultimodalWebSurfer


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")

    surfer = MultimodalWebSurfer(
        "WebSurfer",
        model_client=model_client,
    )
    team = MagenticOneGroupChat([surfer], model_client=model_client)
    await Console(team.run_stream(task="What is the UV index in Melbourne today?"))


asyncio.run(main())
```

@ekzhu (Collaborator) commented Jan 16, 2025

For formatting code, please use backticks (`) rather than quotes (').

Does llama3.2 support multimodal input? That is needed for the WebSurfer.

@zxh9813 (Author) commented Jan 16, 2025

I asked Copilot, and yes, it supports multimodal. Below is the answer:

Yes, Llama 3.2 does support multimodal capabilities. Meta's Llama 3.2 introduces models that can process and understand both text and images. Specifically, the 11B and 90B parameter models are designed for vision-enabled tasks, such as image recognition, document understanding, and image captioning.

@ekzhu (Collaborator) commented Jan 16, 2025

https://ollama.com/blog/llama3.2

Are you using the 11B or 90B model?

@zxh9813 (Author) commented Jan 16, 2025

Yeah, I just tried Llama3.2-vision, which is the 11B model, and I still get the NotImplementedError:

```
---------- user ----------
What is the UV index in Melbourne today?
---------- MagenticOneOrchestrator ----------

We are working to address the following user request:

What is the UV index in Melbourne today?


To answer this request we have assembled the following team:

WebSurfer: 
    A helpful assistant with access to a web browser.
    Ask them to perform web searches, open pages, and interact with content (e.g., clicking links, scrolling the viewport, etc., filling in form fields, etc.).
    It can also summarize the entire page, or answer questions based on the content of the page.
    It can also be asked to sleep and wait for pages to load, in cases where the pages seem to be taking a while to load.


Here is an initial fact sheet to consider:

**1. GIVEN OR VERIFIED FACTS**

* Melbourne (city's name)
* Today's date (implied, as a specific request for the UV index requires current date)
...
  File "c:\Users\c8b4bd\AppData\Local\miniforge3\envs\autogen\lib\asyncio\base_events.py", line 498, in _make_subprocess_transport
    raise NotImplementedError
NotImplementedError
```

```
Error processing publish message for group_chat_manager/0e540a3c-19ff-483c-af6f-dae0d50c9291
Traceback (most recent call last):
  File "c:\Users\c8b4bd\AppData\Local\miniforge3\envs\autogen\lib\site-packages\autogen_core\_single_threaded_agent_runtime.py", line 409, in _on_message
    return await agent.on_message(
  File "c:\Users\c8b4bd\AppData\Local\miniforge3\envs\autogen\lib\site-packages\autogen_core\_base_agent.py", line 113, in on_message
    return await self.on_message_impl(message, ctx)
  File "c:\Users\c8b4bd\AppData\Local\miniforge3\envs\autogen\lib\site-packages\autogen_agentchat\teams\_group_chat\_sequential_routed_agent.py", line 48, in on_message_impl
    return await super().on_message_impl(message, ctx)
  File "c:\Users\c8b4bd\AppData\Local\miniforge3\envs\autogen\lib\site-packages\autogen_core\_routed_agent.py", line 485, in on_message_impl
    return await h(self, message, ctx)
  File "c:\Users\c8b4bd\AppData\Local\miniforge3\envs\autogen\lib\site-packages\autogen_core\_routed_agent.py", line 268, in wrapper
    return_value = await func(self, message, ctx)  # type: ignore
  File "c:\Users\c8b4bd\AppData\Local\miniforge3\envs\autogen\lib\site-packages\autogen_agentchat\teams\_group_chat\_magentic_one\_magentic_one_orchestrator.py", line 187, in handle_agent_response
    await self._orchestrate_step(ctx.cancellation_token)
  File "c:\Users\c8b4bd\AppData\Local\miniforge3\envs\autogen\lib\site-packages\autogen_agentchat\teams\_group_chat\_magentic_one\_magentic_one_orchestrator.py", line 368, in _orchestrate_step
    raise ValueError(
ValueError: Invalid next speaker: n/a from the ledger, participants are: ['WebSurfer']
---------- MagenticOneOrchestrator ----------
What steps can be taken to repair WebSurfer and proceed?
```

@ekzhu (Collaborator) commented Jan 16, 2025

Thanks. I think it's a bug. Cc @afourney @gagb @husseinmozannar

For the speaker selection bug: we need to make sure we keep selecting the same speaker when there is only one participant.

@ekzhu ekzhu added this to the 0.4.3 milestone Jan 16, 2025
@afourney (Member) commented Jan 16, 2025

> Looks like there are two errors:
>
> 1. Invalid next speaker: this seems like a bug. You only have one agent in the group chat, right? Cc @afourney.
> 2. NotImplementedError: it's unclear what this is.

Invalid next speaker happens when the orchestrator's inner-loop progress ledger receives an invalid completion from the model. When this happens, I believe it retries a few times before giving up (I need to check that, though -- this detail may not have been ported to AgentChat). Anyhow, we've not done extensive testing with Ollama endpoints, and the task is generally pretty tough for weaker models. I suspect this is why that aspect is failing.

In this example though, there's only one agent. It's unclear if the M1 Orchestrator is necessary (vs. round robin), but I suppose it benefits from the plan, explicit instruction, and termination check. I will prepare a patch to make speaker selection deterministic in this case.
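The deterministic fallback described above can be sketched roughly as follows (a standalone illustration with hypothetical names, not the actual orchestrator code):

```python
from typing import List


def select_next_speaker(ledger_next_speaker: str, participants: List[str]) -> str:
    """Pick the next speaker, ignoring the ledger when it cannot matter.

    With a single participant, selection is deterministic and the
    (possibly invalid) model-produced ledger value is ignored entirely.
    """
    if len(participants) == 1:
        return participants[0]
    if ledger_next_speaker not in participants:
        raise ValueError(
            f"Invalid next speaker: {ledger_next_speaker} from the ledger, "
            f"participants are: {participants}"
        )
    return ledger_next_speaker


# Even an invalid ledger value ("Human Analyst") resolves safely:
print(select_next_speaker("Human Analyst", ["WebSurfer"]))  # WebSurfer
```

This would avoid both failures seen in the logs above ("Human Analyst" and "n/a"), since any ledger value resolves to the sole participant.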

@afourney (Member) commented
I've created a PR to mitigate this issue: #5079. Could you let me know if it addresses the problem for you?
