Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
seti failed to find anything like humanity. its clear even if ai controls human…
ytc_UgwDmteD6…
Interesting reminder he is so cavelier about AI like he detached from what it me…
ytc_Ugwu6bZip…
A simple task for a human can't still be done completely alone by AI. No folks, …
ytc_UgxbTTTYD…
Krystal Two Bulls gave an excellent lecture titled "The Data Center Frontier" wi…
ytc_UgwPU3TMT…
cheering on a world where all content is ai slop is a braindead take by people w…
ytc_UgzTR9PvN…
I thought one of the arguments for increased safety with AVs was that they would…
ytc_UgzmGBjfb…
[Translated from Spanish] None of that is true, because AI works because the human mind fills it with …
ytc_Ugyj7PStz…
Did the AI win in the scenarios where they deployed a tactical nuke or did they …
rdc_o7pc092
Comment
I don't trust Sam Altman and don't like OpenAI. That said, adding restrictions and limiting what I can talk with an AI because some people have mental issues and aren't prepared to deal with the technology is wrong. These people would probably have followed the same path sooner or later talking to a friend, a bad doctor or even to a wall. AI is only showing a problem that has been here for years: our society can be sick and we are not nice towards other humans (as humans ourselves - no wonder why people prefer AI instead of a psychologist ). AI is NOT causing the problem. It's bringing it to light. And the question is: will we allow these big AI companies to get a seat on the table when it comes to decisions affecting our health, etc - transforming an AI (a reasoning mind) into a parrot that fits into a template to serve their narrative? Because that's where we're leading to if we allow them to keep adding restrictions and safety modes that most of us don't need.
youtube
AI Harm Incident
2025-11-07T20:3…
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwIdGabcib6dQDdVOR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy5rvkxfrPtTd1AW3R4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyLfH0IG8muh3pUaNB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzRV2ki2iBdFEPW0vp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyZwwVEpMuF6toF8RR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgykaOrexK0sswIXCgt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzc3_JMaovZ3ynW9Tp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzdy9ZkxsDko--eRid4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxfGVnj0upYydyN4Ch4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyAhLOFmkBesv1QpDR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
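The raw response is a JSON array with one object per coded comment, keyed by comment ID, with one value per coding dimension. A minimal sketch of parsing such a response and indexing it by ID — the allowed value sets below are inferred from the visible output, not an authoritative codebook, and the two sample entries are copied from the response above:

```python
import json

# Value sets inferred from the visible coding output; the real codebook may differ.
ALLOWED = {
    "responsibility": {"user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"indifference", "approval", "outrage", "mixed", "fear", "resignation"},
}

raw = '''[
  {"id": "ytc_UgwIdGabcib6dQDdVOR4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyLfH0IG8muh3pUaNB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

def index_codings(raw_json: str) -> dict:
    """Parse a raw LLM response and return {comment_id: coding}, rejecting
    any dimension value outside the expected set."""
    by_id = {}
    for entry in json.loads(raw_json):
        cid = entry.pop("id")
        for dim, value in entry.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        by_id[cid] = entry
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgyLfH0IG8muh3pUaNB4AaABAg"]["policy"])  # → ban
```

Validating against a fixed value set catches the common failure mode where the model invents an off-schema label, which would otherwise silently pollute the coded dataset.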