Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "The AI agents we use current wrote upgrades themselves and some says don’t vibe …" (`ytc_UgzUPqeUH…`)
- "yes that is ultimately how it would turn out, however the average layman has no …" (`ytr_UgyN04GH4…`)
- "Maybe we can all go after the AI farms and globally just take them down. It’s ob…" (`ytc_UgxKLVA6T…`)
- "Exactly. And they actually did just drop some of their core safety statements...…" (`rdc_o7cgrx6`)
- "The use of AI is to increase productivity. Companies don’t think about people an…" (`ytc_UgwJ1-xu-…`)
- "Its more than just truck drivers and assembly line workers that are going to be …" (`rdc_fcrnz62`)
- "Art is are ...as long as you make it and create it in some way ai is stupid and …" (`ytc_UgyEuGx8N…`)
- "No, I don't. But I also think that real people are being impacted and nothing is…" (`rdc_lr7s9aq`)
Comment

> Ai is likely to become the worlds most dangerous weapon. Lets be clear here. what government on this planet would not use a superbrain to their advantage? We've seen this scenario played out in films. WarGames is a perfect example. Ironically. Sci-Fi is never too far from reality.

youtube · AI Governance · 2023-05-03T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzOOkNUeiJb5RERBNx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzWvKNpL-JabbwLHXp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxKPdwDxPt-dwR2eI54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzNzkkEZ_duoZAqJlx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxRBo6I6lkl10vX0v14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx8VSIIpBnOTQirtRV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgytqWycwhJdXVDzJRx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwcTyHuqPVf3LZatdF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw8VAFy1zuSMsVpmWt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy6-n7UQKrPTpFHTr54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
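A raw batch response like the one above is a JSON array, one object per comment, coding four dimensions. A minimal sketch of how it could be parsed into a per-comment lookup, with out-of-codebook values dropped — the allowed values below are inferred only from the sample shown here and may not cover the full codebook, and `parse_batch` is a hypothetical helper, not part of any named tool:

```python
import json

# Allowed values per dimension, inferred from the sample response above
# (assumption: the real codebook may contain additional values).
ALLOWED = {
    "responsibility": {"none", "government", "ai_itself", "developer",
                       "user", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "ban", "regulate", "liability"},
    "emotion": {"unclear", "fear", "indifference", "mixed", "outrage"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    skipping rows with a missing id or an out-of-codebook value."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded
```

With the parsed dictionary, the "look up by comment ID" view reduces to `coded["ytc_…"]`, and the coding-result table is just that entry's four key–value pairs.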