Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "AI is not bad. Humans are the source of all issues. Don’t blame the tools, think…" (ytc_UgzORkY6B…)
- "I don't personally believe any type of AI has a soul, so I don't see it as poten…" (ytc_UgwWHVb-8…)
- "I have one weird conversation with Ai where after talking with chat gpt about ra…" (ytc_Ugwjfhuq8…)
- "And because I want to wear a mask and sunglasses than I get called a slave...." (ytc_UgyqFw0o_…)
- "AI sucks and should not have any involvement anywhere. No, how no place humans w…" (ytc_UgyHWxkCK…)
- "The \"AI Layoffs\" were actually job market corrections from overhiring during COV…" (ytc_UgycckSae…)
- "AI is not smarter than humans. What they are is they can process formulas faster…" (ytc_UgyU42aIA…)
- "Ai sucks... it is limited and sterile, it will replace people, Art, Design, comm…" (ytc_UgyIR3o6J…)
Comment

> I wouldn't stop building towards superintelligence if other countries did. If their military outshines mine, and they stop building AI, and my salvation to even the battlefield is possibly in superintelligence, even if it destroys me, why not? Because the other countries have the ability to destroy me anyway, so why not take that chance? And I'm sure we could name a few countries that would. Not saying it's smart but war brings out the worst. 33:35

youtube · AI Governance · 2025-09-06T02:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
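The table above assigns the comment one value per coding dimension. As a minimal illustration (the class and field names are assumptions inferred from the table, not the dashboard's actual schema), such a record could be modeled as:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment across the four dimensions shown above."""
    responsibility: str  # e.g. "government", "developer", "company", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue"
    policy: str          # e.g. "regulate", "none", "unclear"
    emotion: str         # e.g. "mixed", "fear", "outrage"

# The coding result from the table above.
result = CodingResult(
    responsibility="government",
    reasoning="consequentialist",
    policy="none",
    emotion="mixed",
)
```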
Raw LLM Response
```json
[
  {"id":"ytc_UgzDuQlA6Q2uF8t2xox4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxdRpzEx84ouFicFep4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwiG5XWlZFeLBJqoBV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxft9LqCh3BwvBUuD54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwESo-QedIpVgCJKv94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzedWS9mi840Ddzjcd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxrLoDNXMlwfLjaIRB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzck8VixyLG-FM8aod4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyJXJ0IA5LHiy90hJF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz8PRKtJAvpaUNbk2Z4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
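The raw response is a JSON array in which each record carries the comment ID plus the four coding dimensions. A minimal sketch of how such a response could be parsed and checked before use (the required key set is taken from the records above; two records are excerpted for brevity):

```python
import json

# Two records excerpted from the raw LLM response above.
raw = """[
  {"id":"ytc_UgzDuQlA6Q2uF8t2xox4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwiG5XWlZFeLBJqoBV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]"""

# Every record must carry exactly these keys.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
for record in records:
    missing = REQUIRED_KEYS - record.keys()
    if missing:
        raise ValueError(f"record {record.get('id')} is missing keys: {missing}")

# Index by comment ID for the dashboard's "look up by comment ID" view.
by_id = {r["id"]: r for r in records}
```

Validating the key set up front makes a malformed model response fail loudly at ingest time rather than surfacing as blank cells in the coding table.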