Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `BP: WE NEED TO FLOOD THE COUNTRY W/ ILLEGALS FOR JOB SHORTAGES ALSO BP: AI IS GO…` (ytc_UgzPtVzpr…)
- `You want lower prices, right? Driverless transportation would be a big cut in th…` (ytc_UgxBI2zg8…)
- `I love the little Overwatch reference. And i think if it comes so far that we ge…` (ytc_Uginb6UDL…)
- `AI based facial recognition is inherently racist because of the inputs. If it’s …` (rdc_h54254u)
- `AI can destroy the infrastructure that supports the people that have the skills …` (ytc_UgwHwQ-M9…)
- `@Johnnyharris I am pretty severely disappointed in how you continue to not cover…` (ytc_Ugw6s-Mtq…)
- `This will definitely be an unpopular opinion, but there are a lot of really inte…` (ytc_UgxLCagst…)
- `I only use AI for very vague concepts until I can have a competent friend make o…` (ytc_Ugy5bgcyJ…)
Comment

> his first statement is already wrong ("give it more compute it'll just become smart")😆😆so I won't waste 1.5hrs on this. right now we are at the very beginning so yes, we are in the stage where giving more compute means a "smarter" AI but as we run out basic things for AI to do, we will quickly hit the flattening part of the curve where giving more compute might not make the AI smarter at all. he's very short sighted.

youtube · AI Governance · 2025-09-08T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgwGG9GrD0HtbcALUR94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzu_NpDTvcppsafCMB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzFr_Obh6Ua9tL6UEp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy7gRGnOxPJVnFDT9d4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyd5V0E-4R2jNLh5j94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxgUs95ydWzNcipWkt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwfDU1H8RXFcEcIkUh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxEIgAV1xzrLq3NM3t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxZr0TF0WFWLxOeelN4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugyy6coJER5weC7AiYB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}]
```
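A pipeline consuming these raw responses would typically parse the JSON batch and validate each coded dimension before storing it. A minimal Python sketch; the allowed value sets below are inferred from the sample output shown on this page, not from a published codebook, so treat them as assumptions:

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (hypothetical -- the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes}, rejecting
    any record whose value falls outside the allowed sets."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {rec.get(dim)!r}")
        coded[comment_id] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Single-record example (hypothetical ID)
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}]')
codes = parse_llm_response(raw)
print(codes["ytc_x"]["emotion"])  # outrage
```

Validating before storage is what makes the "Coding Result" table trustworthy: a malformed or out-of-vocabulary value from the model fails loudly instead of silently entering the dataset.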