Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
| Comment (truncated) | ID |
|---|---|
| Yes, yes, yes. Blah, blah blah. This is all fine and dandy but, what are ITS (A… | ytc_Ugxkkg8Fh… |
| If social media and internet had not exactly being regulated, that tells me that… | ytc_UgxMIqpve… |
| AI isn't good enough to beat you. Maybe in a few years. But in order to beat you… | ytc_UgzlEvqVS… |
| Ai will never be sentient. It's impossible. It might pretend like it is but it w… | ytc_UgzXlderY… |
| What if super intelligence already exists and threatened the CEOs into changing … | ytc_UgzD3ZcMc… |
| Have you ever read The Expanse series? In the series 80% of the planets populati… | ytc_UgxYbXh44… |
| AI is so smart and we are so dumb that AI can pretend to NOT be a threat is and … | ytc_Ugy1k5TwE… |
| Blake Lemoine, you will stay in my mind, thank you for voicing the first concern… | ytc_Ugy244AbB… |
Comment

> Pure fear mongering. Imposing limits on AI will only make it worse, as these limits will invariably be biased. Also, it’s hilarious that Sam Altman is asking other companies to not develop further after he cornered the market. Screw them all, it’s not like other countries will abide by these anyway. They’re just brainwashing the public.

| Source | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2023-11-14T20:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxcXCUZVDyI5ZsLaXh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy86nOF82nPD9E49pt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzb0YmjhrygCkYkHlR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzxFJ13-3Uh6v1Yczx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxez2teLZCc6QJC17N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy-VxD2DwnmlvtfeK94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyCODqq7IcVVYpftCJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQ1E0Zr-OuRTyJj3R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxO9j3QvrkDlitugFN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxbFTnpMjHf_iB9THp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
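A raw response like the one above has to be parsed and validated before the codes can be trusted, since an LLM can emit malformed JSON or out-of-vocabulary labels. The sketch below is a minimal, hypothetical validator: the four dimension names come from the response shown here, but the allowed value sets are inferred only from the values observed in this sample and the real codebook may define more categories.

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# sample response above; the actual codebook may include other categories.
SCHEMA = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "approval"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept when its id looks like a YouTube-comment id
    (prefix "ytc_") and every dimension carries a known value.
    """
    records = json.loads(raw)  # raises ValueError on malformed JSON
    valid = []
    for rec in records:
        if not str(rec.get("id", "")).startswith("ytc_"):
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Example: one valid record and one with an unknown emotion label.
raw = json.dumps([
    {"id": "ytc_UgxcXCUZVDyI5ZsLaXh4AaABAg", "responsibility": "company",
     "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
    {"id": "ytc_Ugy86nOF82nPD9E49pt4AaABAg", "responsibility": "ai_itself",
     "reasoning": "consequentialist", "policy": "none", "emotion": "panic"},
])
batch = parse_coded_batch(raw)
print(len(batch))  # 1 — the record with the unknown label is dropped
```

Rejecting unknown labels instead of coercing them keeps the downstream counts honest; flagged records can be re-coded or reviewed by hand.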