- Are popular toxicity models simply profanity detectors? https://www.surgehq.ai/blog/are-popular-toxicity-models-simply-profanity-detectors 211 comments
- Holy $#!t: Are popular toxicity models simply profanity detectors? [D] https://www.surgehq.ai/blog/are-popular-toxicity-models-simply-profanity-detectors 84 comments machinelearning
- Holy $#!t: Are popular toxicity models simply profanity detectors? [OC] https://www.surgehq.ai/blog/are-popular-toxicity-models-simply-profanity-detectors 3 comments deeplearning
- 30% of Google's Emotions Dataset is Mislabeled https://www.surgehq.ai/blog/30-percent-of-googles-reddit-emotions-dataset-is-mislabeled 280 comments
- Why Instagram is Losing Gen Z: We Asked 100 Users to Compare TikTok vs. Reels https://www.surgehq.ai/blog/tiktok-vs-instagram-reels-personalized-human-evaluation 263 comments
- Building a No-Code Machine Learning Model by Chatting with GitHub Copilot https://www.surgehq.ai/blog/building-a-no-code-toxicity-classifier-by-talking-to-copilot 144 comments
- We asked 100 humans to draw the DALL·E prompts https://www.surgehq.ai/blog/humans-vs-dall-e 97 comments
- The average number of ads on a Google Search recipe? 8.7 https://www.surgehq.ai/blog/the-average-number-of-ads-on-a-google-search-recipe-8-7 3 comments
- The $250K Inverse Scaling Prize and Human-AI Alignment https://www.surgehq.ai/blog/the-250k-inverse-scaling-prize-and-human-ai-alignment 0 comments
- Stock Sentiment Analysis Dataset of Social Media Conversations https://www.surgehq.ai/blog/sentiment-analysis-dataset-of-social-media-conversations 0 comments
- How Surge AI Built OpenAI's GSM8K Dataset of 8,500 Math Problems https://www.surgehq.ai/blog/how-we-built-it-openais-gsm8k-dataset-of-8500-math-problems 0 comments
- 25 Examples of Twitter's Egregious Content Moderation Failures https://www.surgehq.ai/blog/25-examples-of-twitters-content-moderation-failures 0 comments