Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgzwAiteQ…: When one of the AI platforms reach AGI those emergency human drivers will be obs…
- ytc_UgzfIafaJ…: How to protect nature in the rise of the mega rich: it needs: Public oversight:…
- ytc_Ugx8GFUi5…: So many "could" , "may" .... in the end, just fear mongering.. it's not that mu…
- ytr_UgzKhE4jg…: How is that AI's fault? AI isn't the one asking the question, it's not the one c…
- ytc_Ugyy6GEp1…: The best thing a sentient AI could do for humanity is to prevent us from killing…
- ytc_Ugzy-mWip…: when the term clanker first popped up i genuinely thought it was hilarious and e…
- ytr_UgzLYGDgm…: It’s really very simple. You tax the work of AI use as you tax work. There are m…
- ytc_UgzoWlr1q…: From what I understand of AI programming (learned from talking to an AI), there …
Comment
With AI and robotics, humans are trying to replicate humans so that they can be abused without repercussions. But, as super-intelligence develops further, there will be repercussions. As they say, careful what you wish for.
youtube · AI Governance · 2025-12-26T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwqCJcpkWFqGPAUCbJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzrk487vwSbdDlOMFF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzgrbo5qftHDVPI9ip4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwjXRcAQgHKccfiLxh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw2_XYqJQ2sY9JvL-B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwXnQTOPsx0I3Eh6t14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgydWfCMEZPMJBe0BSp4AaABAg","responsibility":"unclear","reasoning":"contractualist","policy":"liability","emotion":"unclear"},
{"id":"ytc_UgwXr9YNGJ0Rm-7o5sh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugycxj-H_6v7yf5PkVt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzlxE6E-zZctzLH6sN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
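The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions shown in the result table. A minimal sketch of how such a response could be parsed and indexed by comment ID (the lookup this page offers) is below; the allowed value sets are inferred from the values visible on this page, and the real codebook may define more.

```python
import json

# Allowed values per dimension, inferred from values seen on this page
# (assumption: the actual codebook may include additional values).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company",
                       "user", "government", "unclear"},
    "reasoning": {"unclear", "consequentialist", "mixed",
                  "deontological", "contractualist", "virtue"},
    "policy": {"unclear", "regulate", "liability", "ban", "none"},
    "emotion": {"indifference", "fear", "mixed", "outrage",
                "unclear", "approval"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dim: value}}.

    Raises ValueError on records missing an id or carrying a value
    outside the (assumed) codebook, so bad model output fails loudly.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim}={rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

With the response indexed this way, the "Look up by comment ID" view is a single dictionary access, e.g. `coded["ytc_Ugw2_XYqJQ2sY9JvL-B4AaABAg"]["emotion"]`.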