Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "There should be a way to censor emotions from the language model of these chat b…" (ytc_UgxHo2fc2…)
- "I only ever us AI art just to see what the ai will make, i never post any of it …" (ytc_UgwJ6uYOH…)
- "I'm with you on the topic of that AI art can be easy and simple to make and just…" (ytc_UgyawTqos…)
- "Slow slow AI will spread world wide..AI is good but sometimes not because AI can…" (ytc_UgyfzQySZ…)
- "artist adapted when digital art first became a thing and we will adapt again. Th…" (ytc_Ugz0kuAbU…)
- "I spent 13 hours in character ai i stayed up till 3 am chatting with random bots…" (ytc_UgzQxM2Gz…)
- "these things will not fix your problems, it wont change what u want, it doesnt e…" (ytc_Ugxvr4hCj…)
- "World leaders should just settle conflicts by piloting autonomous mechs and batt…" (rdc_ohtr86s)
Comment
I work in AI - accidentally - because I've been in IT and it was going that way.. In particular I focus on GANs for healthcare... I was involved in a seminar on AI risk - and with AI help it took me twenty minutes to write botnets that acted as GANs in the social landscape that would be able to generatively monitor how they move social responses. In another ten I showed how it was possible for a human working with this technology who already had a public platform to manipulate media and public opinion easily able to shift the overton window on any topic. - The point of this is that AI is already a terrible technology that is already being used against us [the plebiscite]. However - what I would also say is that the premise of the technology is amazing - and would not necessarily be as terrible without the fact that we are already a digital species and that social media and increasingly regular media algorithms are easily manipulated. 30 years ago messaging was slow and disparate. Now it is relatively trivial.
Source: youtube · AI Moral Status · 2025-10-30T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
 {"id":"ytc_UgyjxE3ed0-cXL54FoN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwZV9HVtUByR0zeelx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyKzqdR2kM7HQ3gO1t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyIognEwomLuypLOcB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxszYggu5E0cMVPBa14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgyJGvcDzlxp7A9ZQEl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugy5MfxipwIc8Coqa-N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_Ugz-8lEQo8xo8Ulw5Z94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzmPJ7kHypbtvukkSp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz1Jp7u5tsO91sycdh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
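The coding table for a comment is recovered by matching its ID against the model's JSON array. A minimal sketch of that lookup step, using two records copied from the response above (the parsing and indexing code is illustrative, not the tool's actual implementation):

```python
import json

# Subset of the raw model output shown on this page.
raw_response = """[
 {"id":"ytc_Ugy5MfxipwIc8Coqa-N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
 {"id":"ytc_Ugz1Jp7u5tsO91sycdh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}
]"""

# Index the coded rows by comment ID so a single comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

row = codings["ytc_Ugy5MfxipwIc8Coqa-N4AaABAg"]
print(row["responsibility"], row["policy"])  # prints "developer liability"
```

The dimension values printed here match the "Coding Result" table above, since that table is rendered from this same record in the raw response.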