Answer:
For enterprise clients, Wrench’s AI reporting tracks the health and performance of your custom AI by providing a precision score and a quality grade. For more technical users, the report displays the output of hold-out evaluations, confirming that your models remain accurate and precise, and that they are retrained as new data comes on board.
What is the hold-out method? It’s a way to evaluate machine learning models by dividing data into separate sets: one for training the model and others for validation and testing, which confirms the model performs well on new, unseen data.
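As a minimal sketch of the idea (not Wrench’s internal implementation), a hold-out split shuffles the data once and carves it into three disjoint sets; the model only ever trains on the first set, so scores on the other two estimate performance on new data:

```python
import random

def holdout_split(records, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle records and partition them into train / validation / test sets.

    The test set receives whatever remains after the train and validation
    fractions are taken, so the three sets cover all records exactly once.
    """
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = holdout_split(list(range(100)))
print(len(train), len(val), len(test))  # 70 15 15
```

The 70/15/15 split here is a common convention, not a requirement; the key property is that the three sets never overlap.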
Track your AI performance over time
We use charts to visually capture model training and evolution over time, including your score averages and their distributions. This lets us quickly highlight issues that could impact your model’s score health, and lets you see how the model is becoming more accurate so you can trust its predictions.
Easy to understand model grading — no data scientist required
We make it easy to interpret the health of your model by using a simple grading system (“A” through “F”) and an explanation of how to interpret data quality and model performance. There’s no need to have a data science background.
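Purely as an illustration (Wrench’s actual grading thresholds are not documented here), a letter grade can be derived from a numeric quality score along these lines:

```python
def letter_grade(score):
    """Map a 0-100 quality score to a letter grade.

    Thresholds are hypothetical, chosen only to show the shape of the
    mapping: higher cutoffs are checked first, and anything below the
    lowest cutoff falls through to an "F".
    """
    thresholds = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]
    for cutoff, grade in thresholds:
        if score >= cutoff:
            return grade
    return "F"

print(letter_grade(92))  # A
print(letter_grade(55))  # F
```

The point of a mapping like this is that one letter summarizes model health at a glance, while the underlying score remains available for anyone who wants the detail.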
The report allows you to switch between your models’ performance and predictions. Check your grade across different time periods to see which factors have positively or negatively impacted quality and performance.
Still Need Help?
💡 If you encounter any roadblocks, please email support@wrench.ai, and we’ll get back to you promptly.
We have standing office hours every Thursday at 11 a.m. Pacific / 2 p.m. Eastern. You can also schedule a time with a team member for any other day/time by clicking here.