What would you like to be added/modified:
Based on the current multiedge inference benchmark on ianvs, we would like to extend multiedge inference to multiple heterogeneous edges (e.g., mobile phones, smart watches, laptops) to reduce the inference latency of a large DNN model in high-mobility scenarios, where the connection between cloud and edge is unreliable. To achieve this goal, the task includes:
build a benchmark for multiedge inference in ianvs;
implement basic algorithms for DNN partitioning across multiple edge devices;
(Optional) develop a baseline algorithm for this benchmark;
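To make the partitioning step above concrete, here is a minimal sketch of one possible baseline: splitting a sequential DNN into contiguous layer ranges in proportion to each device's compute capacity. The function name, layer-cost model, and device capacities are all illustrative assumptions, not part of ianvs:

```python
# Hypothetical baseline: assign contiguous layer ranges to devices so that
# each device's share of total layer cost roughly matches its share of
# total compute capacity. Costs/capacities are abstract units (e.g. FLOPs).

def partition_layers(layer_costs, device_capacities):
    """Return {device: (start, end)} half-open layer ranges per device."""
    total_cost = sum(layer_costs)
    total_cap = sum(device_capacities.values())
    assignment, start = {}, 0
    devices = list(device_capacities.items())
    for i, (name, cap) in enumerate(devices):
        if i == len(devices) - 1:
            # last device takes whatever remains
            assignment[name] = (start, len(layer_costs))
            break
        budget = total_cost * cap / total_cap
        end, run = start, 0.0
        while end < len(layer_costs) and run + layer_costs[end] <= budget:
            run += layer_costs[end]
            end += 1
        # give every device at least one layer when layers remain
        if end == start and start < len(layer_costs):
            end += 1
        assignment[name] = (start, end)
        start = end
    return assignment
```

For example, `partition_layers([10, 20, 30, 40], {"phone": 1.0, "laptop": 3.0})` gives the phone the first layer and the laptop the remaining three. A real implementation would also need to account for the size of intermediate tensors crossing each cut, since transfer cost between devices can dominate in mobile settings.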
Why is this needed:
In recent years, artificial intelligence models, exemplified by LLMs, have placed extremely high demands on computing power. However, in high-mobility scenarios the connection between edges and cloud is unstable, making it difficult to guarantee quality of service. This results in a very poor user experience for large-model applications in such scenarios.
At the same time, computing power at the edge is not weak either. More and more mobile phones, tablets, and laptops are equipped with AI chips, allowing them to run neural networks locally, albeit with relatively high latency on a single device. We therefore ask whether it is possible to pool the computing power of the multiple edge devices in one person's hands to reduce model inference latency and ensure service quality.
KubeEdge provides excellent foundational capabilities for collaboration and includes an example of multi-edge collaboration. We therefore plan to extend that example to collaboration across multiple heterogeneous edges.
Hi @yunzhe99 , according to your description and reference link, ianvs has currently implemented a simulation benchmark for multiedge inference, but it requires users to manually perform computational graph partitioning. This task is to implement multiedge inference between heterogeneous devices on this basis. The process involves addressing how to automatically partition the computational graph and schedule appropriate subgraphs or operators to suitable heterogeneous devices. This is a very interesting challenge, and I am willing to undertake it.
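Once the graph is partitioned automatically, the scheduling part mentioned above also needs a starting point. The following is a minimal greedy sketch, not an ianvs API: independent subgraphs are assigned, heaviest first, to whichever device is currently least loaded, with per-device speed factors modeling heterogeneity. All names and numbers are illustrative assumptions:

```python
import heapq

def schedule_subgraphs(subgraph_costs, device_speeds):
    """Greedy list scheduling of independent subgraphs on heterogeneous
    devices: process subgraphs in decreasing cost order, assigning each to
    the device with the earliest current finish time. Returns the placement
    and the resulting makespan."""
    # Min-heap of (current_finish_time, device_name).
    heap = [(0.0, name) for name in device_speeds]
    heapq.heapify(heap)
    placement = {}
    for sg, cost in sorted(subgraph_costs.items(), key=lambda kv: -kv[1]):
        finish, dev = heapq.heappop(heap)
        finish += cost / device_speeds[dev]  # faster device => lower latency
        placement[sg] = dev
        heapq.heappush(heap, (finish, dev))
    makespan = max(t for t, _ in heap)
    return placement, makespan
```

This ignores data-transfer cost and inter-subgraph dependencies, which a real scheduler for this benchmark would have to model; it is only meant to show the shape of the scheduling decision.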
If anyone has questions regarding this issue, please feel free to leave a message here. We would also appreciate it if new members could introduce themselves to the community.
Recommended Skills:
Python, KubeEdge-Ianvs
Useful link:
https://github.com/kubeedge/ianvs/tree/main/examples/MOT17/multiedge_inference_bench/pedestrian_tracking