Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I'm pretty sure most of those layoffs you just mentioned have very little to do …" (ytc_Ugy0a6oIG…)
- "For business applications, like chat bots and what not, yeah this isn’t good. Bu…" (ytc_UgzLJ_DBb…)
- "So the AI thinks competence is important in critical fields, but in entrainment …" (ytc_UgzgzYCGd…)
- "Next question if AI has access to all information, thus our whole history would …" (ytc_UgyRAlXtA…)
- "Exactly! I am 61 years old. I have been against the AI - In general -since its i…" (ytc_UgzRbwo80…)
- "I have a room with knuckles , sonic, and tails but I had to make a new one becau…" (ytc_UgzqNaTF1…)
- "You literally said what's my opinion on AI art, i have the exact same response…" (ytc_UgxlxUFF3…)
- "claude gives every best answer possible and if any of yall think that other ai i…" (ytc_Ugwj7ewgD…)
Comment
> Nope. Computer Scientists cannot make AI safer. But the under 45 have been led to believe CS are the smartest people in the room so the fact they are impotent in the area of ethics causes people to think it's not possible to make AI safe. Being back Humanities, critical thinking and Philosophy.

youtube · AI Governance · 2026-02-07T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzI1ykoDYDiNF7W57J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwGZBzZBKakMrIu1Cp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzFH0O8k4ilpDH3ltp4AaABAg","responsibility":"expert","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxYo1BcT_aEpRXW2-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxXb7iZsWqXjh5Wa054AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwYlkRgSCBvJjhi7WB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzN94R-3AujKahQCJt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgwzmNOCGN60o_N9OZ54AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwezLuYEzEKkc3rGnN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy6ELLHakTV3swHM3t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
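A minimal sketch of how a raw response like the one above might be parsed back into per-comment codes. The `index_codes` helper and the two-record sample batch are illustrative assumptions, not part of the actual coding pipeline; the dimension names and values are taken from the Coding Result table and JSON shown above.

```python
import json

# Two records copied from the raw LLM response above (the real batch has ten).
raw_response = """
[
 {"id":"ytc_UgzN94R-3AujKahQCJt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
 {"id":"ytc_UgxYo1BcT_aEpRXW2-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse the model's JSON array and index records by comment ID,
    keeping only the expected dimensions."""
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec[dim] for dim in DIMENSIONS}
        for rec in records
    }

codes = index_codes(raw_response)
print(codes["ytc_UgzN94R-3AujKahQCJt4AaABAg"]["policy"])  # industry_self
```

Indexing by comment ID makes the per-comment lookup shown on this page (comment → coded dimensions) a single dictionary access.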