Hacker News
- Show HN: I implemented evals metrics for LLMs that run locally on your machine https://github.com/confident-ai/deepeval 3 comments
- Show HN: DeepEval – Evaluation and Unit Testing for LLMs https://github.com/confident-ai/deepeval 8 comments
- DeepEval: The Open-Source LLM Evaluation Framework https://github.com/confident-ai/deepeval 6 comments python
- [D] Referenceless NLP Evaluation https://github.com/confident-ai/deepeval 2 comments machinelearning
- [P] DeepEval - Neural Framework For Testing LLMs https://github.com/confident-ai/deepeval 2 comments machinelearning
Linking pages
- GitHub - mlabonne/llm-course: Course to get into Large Language Models (LLMs) with roadmaps and Colab notebooks. https://github.com/mlabonne/llm-course 10 comments
- LLMs.HowTo | An Overview on Testing Frameworks For LLMs https://llmshowto.com/blog/llm-test-frameworks 3 comments
- Optimizing LLMs: Tools and Techniques for Peak Performance Testing - Semaphore https://semaphoreci.com/blog/llms-performance-testing 0 comments
- GitHub - alopatenko/LLMEvaluation: A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use cases, promote the adoption of best practices in LLM assessment, and critically assess the effectiveness of these evaluation methods. https://github.com/alopatenko/LLMEvaluation 0 comments
Search title: GitHub - confident-ai/deepeval: The LLM Evaluation Framework