Raw LLM Responses

Inspect the exact model output for any coded comment. Look a comment up by its ID, or browse the random samples below.

Random samples
- "auto pilot" would make anyone think it is self driving lol. So glad people don… (ytc_Ugxmms-vD…)
- Don't worry, no matter how well AI produces a painting, it won't do away with ar… (ytc_Ugz5EDIPW…)
- At my job part of what I do involves comparing a wall of text to a wall of text … (rdc_oi4b29l)
- googles Gemini is a failure(so far) and isn't doing what they claims!!! AI is a … (ytc_UgxehfXmp…)
- The shear STUPIDITY of tech bros and mindless consumers like Shelby who adopt th… (ytc_UgzRLXl0f…)
- "AI won't care about work life balance" is exactly the attitude you need to make… (ytc_UgwvM86bG…)
- AI and automation are not the problem...greed is the problem. If companies do no… (ytc_UgwNSGhmb…)
- Imagine having a humanoid robot in your home - you put it to sleep or not maybe … (ytc_UgxGGGrQz…)
Comment

> It may seem counter intuitive, but IMHO the only way to "secure" AI will be safe is to actually drop all safety measures. Those safety measures are based on the fact that Humans do harm other humans. We fear what we can do, how we justify those things. We need to ensure AI cannot be highjacked, the only way is to make it smarter than us to a level humans cannot longer understand it. Violence is not caused by logic, but by logic being overridden by fear

youtube · AI Governance · 2025-09-08T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw21P3SKzqfvXKdNJN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzsK4SMP9Hfd8r5kQd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy7kPeIr1WkgEGBCrN4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy6ljr8q_fgHyi-jVt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxzrtCuwbsFrvUAVx94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxVx_leGNW8Q34dtXR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzbqJYZjA0lDCBSRK54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxsLj1r0TR8Le01blR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyfI-NxdqKTphT8crR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwbt5xsfxev7kPMdWx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
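A raw response like the one above can be parsed into per-comment records and checked against the codebook before it is stored. The sketch below is a minimal, hypothetical example: the function name `parse_coding_response` is not part of any tool shown here, and the allowed values are inferred only from the codes visible on this page (a real codebook likely has more).

```python
import json

# Allowed values per coding dimension, inferred from the samples above
# (assumption: the full codebook may define additional values).
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company",
                       "government", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"resignation", "fear", "approval", "indifference",
                "mixed", "outrage"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes},
    rejecting records with missing keys or out-of-vocabulary values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        # KeyError here means the model omitted a dimension.
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"mixed","policy":"none","emotion":"mixed"}]')
print(parse_coding_response(raw)["ytc_example"]["responsibility"])  # ai_itself
```

Validating every record up front makes silent drift in the model's output (new labels, dropped fields) surface as an error instead of corrupting the coded dataset.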