Domain-Specific Large Model Benchmarking Based on KubeEdge-Ianvs #95
Comments
If anyone has questions regarding this issue, please feel free to leave a message here. We would also appreciate it if new members could introduce themselves to the community.
I have a question about the Benchmark Dataset Map: which domains should this dataset cover? Is it intended for all domains, or only the industrial and government sectors?
/assign
What would you like to be added/modified:
Based on existing datasets, this issue aims to build a benchmark for domain-specific large models on KubeEdge-Ianvs. Its goal is to help Edge AI application developers validate and select the best-matched domain-specific large models. This issue includes:
Why is this needed:
As large models enter the era of scaled applications, the cloud already provides infrastructure and services for them. Customers have further raised targeted requirements on the edge side, including personalization, data compliance, and real-time response, making cloud-edge collaborative AI services a major trend. However, two major challenges remain across product definition, service quality, service qualifications, and industry influence: general competitiveness and customer trust. The crux of the matter is that current large-model benchmarking focuses on assessing general basic capabilities and fails to drive large model applications from an industry- or domain-specific perspective.
This issue reflects the real value of large models through industry applications, from the perspectives of domain-specific large models and cloud-edge collaborative AI, using industry benchmarks to drive the incubation of large model applications. Based on the collaborative AI benchmark suite KubeEdge-Ianvs, it supplements the large model testing tool interface, provides matching test datasets, and constructs large model test suites for specific domains, e.g., government.
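As a rough illustration of what a domain-specific test suite entails, the sketch below scores a model on a small domain dataset using exact-match accuracy. This is a minimal standalone sketch, not the Ianvs API: the names `Sample`, `exact_match_accuracy`, and the stub model are all hypothetical, and a real suite would plug a domain dataset and metric into Ianvs's test-case configuration instead.

```python
# Hypothetical sketch of a domain-specific LLM benchmark metric.
# All names here are illustrative; this is NOT the Ianvs interface.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Sample:
    """One benchmark item: a prompt and its reference answer."""
    prompt: str
    reference: str


def exact_match_accuracy(model_fn: Callable[[str], str],
                         dataset: List[Sample]) -> float:
    """Fraction of samples where the model's answer exactly matches
    the reference (case- and whitespace-insensitive)."""
    correct = sum(
        1 for s in dataset
        if model_fn(s.prompt).strip().lower() == s.reference.strip().lower()
    )
    return correct / len(dataset)


if __name__ == "__main__":
    # Stub standing in for a real domain-specific LLM.
    answers = {"capital of france?": "Paris"}
    stub = lambda prompt: answers.get(prompt, "unknown")

    data = [Sample("capital of france?", "paris"),
            Sample("tallest mountain?", "Everest")]
    print(exact_match_accuracy(stub, data))  # 0.5
```

In practice, exact match would be replaced by domain-appropriate metrics (e.g., rubric scoring or retrieval-grounded checks), but the shape, a dataset of samples plus a metric over model outputs, is what the proposed test suites would standardize.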
Recommended Skills:
KubeEdge-Ianvs, Python, LLMs
Useful links:
Introduction to Ianvs
Quick Start
How to test algorithms with Ianvs
Testing incremental learning in industrial defect detection
Benchmarking for embodied AI
KubeEdge-Ianvs
Example LLMs Benchmark List
Ianvs v0.1 documentation
(China) National standard plan 《人工智能 预训练模型 第2部分:评测指标与方法》 ("Artificial Intelligence, Pre-trained Models, Part 2: Evaluation Metrics and Methods"), along with standardization documents for government, industrial, and other domain-specific large models