Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I think this comes back full circle to an operating system and the operator. When you have a system controlled by someone good, the system isn’t a problem. When the system is controlled by someone bad, that’s the start of the problem. Whilst we currently have an idea of who the bad guys in the world are, well, on the worlds stage we do, that’s not considering all the millions of babies being born now or who are toddlers now who are growing up with this technology. We know there is evil in the world, it’s on the news every day, how many of those are going to be smart enough to play around with AI and allow their intrusive thoughts play out through it…. It will only take one to get a breakthrough and this whole thing will implode catastrophically…..
youtube
AI Governance
2025-06-18T19:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy1EQhfunsv1dWZhSt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgziY8Unu7Mmceoyiw54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyH14eb8lDk-rJ8Rl54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxvxSv8z4oQsLC8Nnh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgyD9dinVTbO7JG7eKl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwZBsnIXg6s3M722FZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgynMc8kJ5UGG1TqmJJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxNQZUKls6WBrKnWQR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwytrPoExKqOU0uNht4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxwELlwNEpdDFdaV4V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
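Since the model returns one JSON object per comment, a downstream step has to parse the batch and check each record against the coding scheme before storing it. The sketch below shows one way to do that in Python; the allowed code sets are inferred from the values visible in the sample output above (they are assumptions, not the project's authoritative codebook).

```python
import json

# Allowed codes per dimension, inferred from the sample response above.
# These sets are assumptions; the real codebook may contain more values.
ALLOWED = {
    "responsibility": {"user", "developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "liability", "industry_self"},
    "emotion": {"approval", "fear", "outrage", "indifference", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown codes."""
    records = json.loads(raw)
    for rec in records:
        if not rec.get("id"):
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Example with one record from the sample batch.
raw = ('[{"id":"ytc_UgyD9dinVTbO7JG7eKl4AaABAg","responsibility":"user",'
       '"reasoning":"virtue","policy":"none","emotion":"resignation"}]')
batch = validate_batch(raw)
print(f"validated {len(batch)} record(s)")  # → validated 1 record(s)
```

In practice a stricter pipeline might also verify that every comment ID sent in the prompt appears exactly once in the response, so silently dropped comments are caught.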