Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)

- "3:43 a quick google search says otherwise. The AI was indeed instructed to do th…" (ytc_UgzF4E1gp…)
- "Don't worry! On our AITube channel, we explore various aspects of artificial int…" (ytr_Ugy8P04gQ…)
- "@B0XAL0TXD None wanna pay like $200 or $100 for an art crafted by a human, when …" (ytr_UgwF8tMLD…)
- "“Meta employees say the AI’s stiff movement and expressionless face are far too …" (rdc_oh77y71)
- "I switched from basic AI tools to Humanlike Writer for my content creation - it …" (ytr_Ugw7QaAJv…)
- "All our problems are already solvable The issue is democratic control, which is …" (ytc_UgybwfeGL…)
- "Can you use AI to help with speech ? I too have problems getting words out and e…" (ytc_UgwibfSfB…)
- "Meanwhile buddy is sucking up communities drinking water and air for his AI data…" (ytc_Ugx-7o7X1…)
Comment
> I only have one thing to say, i am a software engineer.
> If AI was able to be really smart, at the point of out smarting humans, the first thing it would do is convince humans that humans are smater. Also a smart robot would not tell you its plans to end human race. That would not be part of the plan.
Platform: youtube
Category: AI Governance
Posted: 2024-03-25T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxuHi5IfSc1a7G5gNB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx9UloyUaOoo2sG9Kd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw1PXT-f2LEf0JUoKd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyhHxg1dL7sZyCrkFF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzfTIQO2CoSpkOz2Ap4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyRY7JHzBJnI4grgs14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyNnakAdVRdNp_FtJ14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxygVxJUIo4VY-2m_F4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw62-PupJDw1wM1bmZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwVM7ZQRLZTIrFAEft4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"approval"}
]
```
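A raw batch response like the one above can be validated and indexed by comment ID before it is stored. The sketch below is illustrative, not the tool's actual pipeline: the allowed values per dimension are inferred from the labels visible on this page, not from an authoritative codebook, and the `index_codings` helper is hypothetical.

```python
import json

# Two records excerpted from the raw batch response shown above.
raw = '''[
  {"id":"ytc_UgxuHi5IfSc1a7G5gNB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyhHxg1dL7sZyCrkFF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]'''

# Allowed values per coding dimension -- inferred from labels on this page,
# not an official codebook.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "user", "unclear"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "ban", "regulate", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed", "indifference"},
}

def index_codings(raw_json: str) -> dict:
    """Validate each record against SCHEMA and index it by comment ID."""
    by_id = {}
    for rec in json.loads(raw_json):
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} value {rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

codings = index_codings(raw)
# Look up by comment ID, as the page's search box does.
print(codings["ytc_UgyhHxg1dL7sZyCrkFF4AaABAg"]["emotion"])  # mixed
```

Rejecting out-of-vocabulary values at ingest time keeps a single malformed model response from silently polluting the coded dataset.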