Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Reminder, since people somehow haven't noticed yet: AI explicitly tells you to double check every single thing you read. AI can make mistakes, and it and its creators readily acknowledge it. If you can't use LLMs responsibly, *don't use them.* Just like a car or a gun.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Harm Incident |
| Posted | 2025-11-24T23:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw2JQzh7q1Roc7f4eZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxjfYg4j3ydrt3A0h94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxByghFbZK3sjV2Wzt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwY7-7DyTErMzSKxiJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzw7gGwHB3ZVazbdhN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxvk0_1YaKeiHN2zfx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzLhzFyZzLBVUyEkAh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwrvzR2kmBgIXiKVSB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxOFUpibN6qqQFwS2x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz0hyF42eCAfD-UMD94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
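The raw response above is a JSON array of per-comment codings, so pulling out the coding for a single comment ID is a one-line index build. A minimal sketch (the `raw_response` string here is a one-row excerpt of the batch above; the helper name is illustrative, not part of the tool):

```python
import json

# Excerpt of a raw LLM batch response: a JSON array of coding rows,
# one object per comment ID.
raw_response = """
[
  {"id": "ytc_Ugxvk0_1YaKeiHN2zfx4AaABAg",
   "responsibility": "user",
   "reasoning": "deontological",
   "policy": "industry_self",
   "emotion": "approval"}
]
"""

# Index the rows by comment ID so any coded comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugxvk0_1YaKeiHN2zfx4AaABAg"]
print(coding["responsibility"], coding["policy"])  # → user industry_self
```

This is the same lookup the coding-result table reflects: the selected comment's ID maps to one row of the batch, whose dimension values (responsibility, reasoning, policy, emotion) populate the table.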