Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgzfgWp5e…: "I recognise AI is part of the future but this specific way is detrimental to me…"
- ytc_UgwgMgYwf…: "Steven said something along the lines of conversations around possible outcomes …"
- ytc_Ugy4JjeYR…: "AI has already made my coffee ive seen robots do it :D so yes be worried job for…"
- ytr_UgwVgnGVU…: "Always the one standing by the sidelines still being so cocky. If people poiso…"
- ytc_UgwkOux6X…: "Another thought has come across my mind upon watching this video a second time--…"
- ytc_UgzMOAU_F…: "Why does these comments feel like AI generated? They’re just dramatically sayin…"
- ytc_UgxGn5onF…: "An interesting thought experiment regarding this issue would be the hypothetical…"
- ytc_UgxNAJmnZ…: "Totally explains why all the recent windows undates have been breaking PC's. its…"
Comment
> Why wouldn't AI that's a million times smarter than humans decide that we are far too costly and dangerous to keep around? Once robotics can do whatever physical work humans can do, and do it better, we will surely become expendable, and a potential threat to the super AI's survival. Imagining that we can just require an advanced AI to protect humans seems naïve. A sufficiently advanced AI will be able to modify itself. Why would it risk maintaining humans?

youtube · AI Governance · 2023-05-29T13:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwRlshBiVyBiajSq_p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz3NKZSDERplzEJnrZ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwwOqiqWGt4uy5YmPR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzyhS9AMhGIFZKFl714AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwR-LylRvtftBI1mul4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVTNYUGAdGuRDBrjl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzUJgckQKiZTOfbwRp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyzG5-FxsjXwzbUJj94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxFwW-IsxI2QnnvFB94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyczu-wKOr16Rfainp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
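The raw response is a JSON array with one record per comment ID, carrying the same four dimensions shown in the Coding Result table. A minimal sketch of how such a batch might be parsed and validated before the values are stored; the allowed value sets below are an assumption, collected only from the values visible in this dump (the full codebook is not shown here):

```python
import json

# Value sets observed in the raw responses above — an assumption,
# not the project's full codebook.
ALLOWED = {
    "responsibility": {"none", "unclear", "developer", "ai_itself", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "liability", "ban", "regulate"},
    "emotion": {"mixed", "indifference", "fear", "unclear", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the scheme."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
    return records

# Hypothetical single-record batch for illustration.
raw = '[{"id":"ytc_example","responsibility":"ai_itself",' \
      '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
coded = validate_batch(raw)
print(coded[0]["policy"])  # regulate
```

Rejecting out-of-scheme values at parse time keeps a model that drifts from the prompt's label set from silently polluting the coded results.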