
Domain-Specific Large Model Benchmarking Based on KubeEdge-Ianvs #95

Open · MooreZheng opened this issue May 7, 2024 · 4 comments
Labels: kind/feature (Categorizes issue or PR as related to a new feature.)

MooreZheng (Collaborator) commented May 7, 2024

What would you like to be added/modified:
Based on existing datasets, this issue aims to build a benchmark for domain-specific large models on KubeEdge-Ianvs, helping Edge AI application developers validate and select the best-matched domain-specific large models. This issue includes:

  1. Benchmark Dataset Map: A mapping document, e.g., a table, that lists test datasets and their download methods for various specific domains.
  2. Large Model Interfaces: Integrate open-source benchmarking projects like OpenCompass, and provide model API addresses and keys for invoking large models online (see the first sketch after this list).
  3. Domain-Specific Large Model Benchmark: Focuses on NLP or multimodal tasks. Constructs a suite for the government sector, including test datasets, evaluation metrics, testing environments, and usage guidelines.
  4. (Advanced) Industrial/Medical Large Model Benchmark: Includes metrics and examples.
  5. (Advanced) Efficient Evaluation: Enables concurrent execution of test tasks with automatic request dispatch and result collection (see the second sketch after this list).
  6. (Advanced) Task Execution and Monitoring: Visualizes the large model invocation process.
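
As a rough illustration of the interface in item 2, here is a minimal, hypothetical sketch assuming an OpenAI-compatible endpoint; `BASE_URL`, `API_KEY`, and `MODEL_NAME` are placeholders, not part of Ianvs or OpenCompass:

```python
# Hypothetical sketch: invoking an online large model for benchmarking,
# assuming an OpenAI-compatible endpoint. All constants are placeholders.
from openai import OpenAI

BASE_URL = "https://example.com/v1"  # placeholder model API address
API_KEY = "sk-..."                   # placeholder key, supplied by the user
MODEL_NAME = "example-model"         # placeholder model identifier

client = OpenAI(api_key=API_KEY, base_url=BASE_URL)

def query_llm(prompt: str) -> str:
    """Send one benchmark prompt to the model and return its answer text."""
    response = client.chat.completions.create(
        model=MODEL_NAME,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,  # deterministic output is usually preferred in benchmarks
    )
    return response.choices[0].message.content
```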
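
And a sketch of the concurrent evaluation in item 5, reusing the `query_llm` helper above; the worker count is an arbitrary example value, and a real harness would also need rate limiting and error handling:

```python
# Hypothetical sketch: fanning benchmark prompts out to the model API
# concurrently and collecting results in the original input order.
from concurrent.futures import ThreadPoolExecutor

def evaluate_concurrently(prompts: list[str], max_workers: int = 8) -> list[str]:
    """Run query_llm over all prompts with a thread pool, preserving order."""
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(query_llm, prompts))
```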

Why is this needed:
As large models enter the era of scaled applications, the cloud already provides infrastructure and services for them. Customers have further raised targeted application requirements on the edge side, including personalization, data compliance, and real-time capabilities, making AI services with cloud-edge collaboration a major trend. However, two major challenges remain in terms of product definition, service quality, service qualification, and industry influence: general competitiveness and customer trust. The crux of the matter is that current large model benchmarking focuses on assessing general basic capabilities and fails to drive large model applications from an industry- or domain-specific perspective.

This issue reflects the real value of large models through industry applications, from the perspectives of domain-specific large models and cloud-edge collaborative AI, using industry benchmarks to drive the incubation of large model applications. Based on the collaborative AI benchmark suite KubeEdge-Ianvs, it supplements the large model testing tool interface, provides matching test datasets, and constructs large model test suites for specific domains, e.g., government.

Recommended Skills:
KubeEdge-Ianvs, Python, LLMs

Useful links:
Introduction to Ianvs
Quick Start
How to test algorithms with Ianvs
Testing incremental learning in industrial defect detection
Benchmarking for embodied AI
KubeEdge-Ianvs
Example LLMs Benchmark List
Ianvs v0.1 documentation
(China) National standard plan "Artificial Intelligence Pre-trained Models, Part 2: Evaluation Metrics and Methods" (《人工智能 预训练模型 第2部分:评测指标与方法》), plus standardization documents for government, industrial, and other large models

MooreZheng (Collaborator, Author) commented May 9, 2024

If anyone has questions regarding this issue, please feel free to leave a message here. We would also appreciate it if new members could introduce themselves to the community.

IcyFeather233 (Contributor) commented

I have a question about the Benchmark Dataset Map: which domains should this dataset cover? Is it for all domains, or just industrial and government sectors?
Also, if I need to submit a preliminary version, where would be the most appropriate directory to submit it?

MooreZheng (Collaborator, Author) commented

> I have a question about the Benchmark Dataset Map: which domains should this dataset cover? Is it for all domains, or just industrial and government sectors? Also, if I need to submit a preliminary version, where would be the most appropriate directory to submit it?

  1. For this issue, the preferred domains are those where large models are currently making a great impact, e.g., government affairs, industry, and medicine.
  2. That depends on what the submitted version includes. At the beginning, a proposal would be preferred.

IcyFeather233 (Contributor) commented

/assign
