Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up with its comment ID.
Comment

> if AI's are so smart, they would already have told us that Mars can never be terraformed to become earth-like. It would already have told us that humanity is on the course of trashing the only planet where humans can reside, leading to the great anthropocene mass extinction. I would have told us that no political party is presenting voters with a way out. It would have told us that the UNIVERSE is now presenting humanity with a ultimatum, like the ET's in the movies "The Day the Earth Stood Still", one that transcends all boundaries of space and time, stating that our days are numbered if humanity does not change. You can take that to the bank!

| Field | Value |
|---|---|
| Source | youtube |
| Category | AI Governance |
| Posted | 2025-07-17T17:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxiUdNPCFp8AM1O8Kh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxFWeC22fPq3Qn4XbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyEc4Q3t7u2lHCmLYh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxxM3wM6qmx1c1BxUh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyotiU_Ps9wq5PO3kR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyn0v0w6I3Y3y3DqSN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzvWuQ3WgPm44mmrY94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwAZjaVqqwqpcUno7x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz328f_VwAUCwoPkzB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgywnAP2DPq1hAhTabF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
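The raw response is a JSON array of records keyed by `id`, so looking up one comment's coded dimensions is a parse-and-index operation. A minimal sketch, assuming the batch response has been saved as a string (two records from the response above are inlined here for illustration; a real lookup would load the full saved response):

```python
import json

# Excerpt of a raw LLM batch response, inlined as a string for this sketch.
raw_response = '''
[
  {"id": "ytc_Ugyn0v0w6I3Y3y3DqSN4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzvWuQ3WgPm44mmrY94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
'''

records = json.loads(raw_response)

# Index the batch by comment id so any coded comment can be inspected directly.
by_id = {rec["id"]: rec for rec in records}

match = by_id.get("ytc_Ugyn0v0w6I3Y3y3DqSN4AaABAg")
if match is not None:
    print(match["emotion"])  # -> outrage, matching the Coding Result table
```

Building the dict once makes repeated ID lookups O(1), which matters when cross-checking many coded comments against their raw responses.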