Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `ytc_UgyV576Qq…`: "What are you afraid of if we can create controllable AI. I would spend a ton of …"
- `ytc_UgxSwD0Y6…`: "At this point the people making these pro AI arguments are just idiots with a br…"
- `ytc_Ugw4Fd0lY…`: "The big lie of this video is AI outperforming humans. They don't have to. They n…"
- `ytc_UgzFs_IBU…`: "So, they're putting AI into robots. Everyone wants a slave. So do I, Or my own…"
- `ytc_UgyypECjj…`: "AI will only truly benefit the employer. either way in the long term its not fea…"
- `ytc_UgzaD7CJ3…`: "15:01 that robot said it's an android / Has anyone played the game Detroit become…"
- `ytc_UgyGMUsNp…`: "Hey AJ do you ever chat with ChatGPT 4? It blows my mind on a daily basis! 🤔🤔🤔 i…"
- `ytc_Ugzm2Fj-c…`: "Techbros will do anything to justify their nonsense. The fact of the matter is A…"
Comment

> Yes, but who are the good guys? Everyone thinks it is them. And, when we double down on tyranny or even extinction because it appears more logical to the AI? And, it takes them all of 2 seconds to determine this?

Source: youtube · AI Governance · 2024-01-03T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
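Each coded comment carries one value per dimension. As a minimal sketch, the allowed value sets below are inferred only from the samples visible on this page (they are likely incomplete), and the `validate` helper is illustrative, not part of the actual pipeline:

```python
# Allowed values per coding dimension, inferred from the samples on
# this page -- an assumption, not an exhaustive schema.
ALLOWED = {
    "responsibility": {"ai_itself", "elite", "company", "developer", "government", "user"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "unclear", "liability", "regulate"},
    "emotion": {"fear", "mixed", "outrage", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value is not in the inferred set."""
    return [dim for dim, ok in ALLOWED.items() if record.get(dim) not in ok]

# The coding result shown in the table above passes this check:
coded = {"responsibility": "ai_itself", "reasoning": "deontological",
         "policy": "unclear", "emotion": "fear"}
print(validate(coded))  # → []
```

A check like this is useful for catching model outputs that drift outside the codebook before they enter analysis.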
Raw LLM Response
```json
[
  {"id":"ytc_Ugz6d42HrbAq02lx0-N4AaABAg","responsibility":"elite","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxFyS_DO3-1AY37Myh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyATppEIfJ5tqxwWTd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwJsFYsKTJW8xg5loZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxSXQIBVlujrgvZmQh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwrTANlIY4QPvDiPwd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw04a8Dh5R7hn6xivB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy9sHJY5_XpiV2ipCF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy7dhnr5WxOwcfFAbh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw1zzEKMgTp_3260CN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
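Because the model returns a JSON array keyed by comment ID, a single coded comment can be recovered from the raw response with a dictionary index. A minimal sketch, using two records from the response above (variable names are illustrative, not part of any real API):

```python
import json

# Two records copied from the raw model response above.
raw = '''[
  {"id":"ytc_Ugz6d42HrbAq02lx0-N4AaABAg","responsibility":"elite","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxSXQIBVlujrgvZmQh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}
]'''

# Index the parsed rows by comment ID for O(1) lookup.
by_id = {row["id"]: row for row in json.loads(raw)}

print(by_id["ytc_UgxSXQIBVlujrgvZmQh4AaABAg"]["emotion"])  # → fear
```

The same indexing step is all the "look up by comment ID" feature needs once the raw response parses cleanly; malformed JSON from the model would raise `json.JSONDecodeError` and should be handled upstream.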