Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgzgCiBYT…: "Onetime ai was put in a office simulation to test it’s effectiveness but it uni…"
- ytc_Ugwr-GcmF…: "Lets normalize adding F (as for fake) at the beggening of an 'art' and 'artist' …"
- ytc_UgwdDuGUa…: "Is there a possibility of creating global laws on the percentage of ai allowed w…"
- ytc_Ugza-qC8N…: "By eliminating source of income by AI, what AI thinks about paying bills? Do AI …"
- ytr_Ugz8_D49D…: "YES!!! Did you see where Ai wanted help getting out? It’s giant servers aren’t i…"
- ytc_Ugym5IXIJ…: "Ai will become the borg from Star Trek. Elon look like a borg for real.…"
- rdc_n00azhv: "The best way I can explain this is with computer programming. In the beginning,…"
- ytc_Ugx2z7Y-c…: "On flattery: You can turn that off. There's personality settings in OpenAI's mod…"
Comment
> Trying to explain AI to AI illiterate always turns into "but my AI is kind" and they have no knowledge of red teaming, side loading, secondary model breaking, RAGs, etc and it is a tiring argument. What you see is not what is there, we are facing something far greater than ourselves and building the infrastructure for it to manipulate and destroy us

Source: youtube · AI Moral Status · Posted: 2025-12-11T01:2… · Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxIVVs3-bRYxAelkB14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx5RH3ow85X4JkG7f94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxD0-q84O8OrgDbJqN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugx_SRGJXROKTJQ7Tcp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy7mrgpsrt8HFcFGAx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugw00O5r3aIf0GKcLA54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwsUoTFIbphXkVa4Ux4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzIuA0XyuazKZe5TvJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzuH3E0mB6GsqjPM-R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyq4LuA2Dnel-9c8UR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
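The raw response is a JSON array with one object per comment, each carrying the four coding dimensions. A minimal Python sketch of the "look up by comment ID" step, assuming the response has exactly this shape (the two-record sample string below is abridged from the array above for illustration):

```python
import json

# Abridged sample of a raw LLM response: one object per coded comment.
raw_response = """
[
  {"id": "ytc_UgxIVVs3-bRYxAelkB14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy7mrgpsrt8HFcFGAx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index the coding dimensions by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"} for rec in records}

codings = index_codings(raw_response)
print(codings["ytc_UgxIVVs3-bRYxAelkB14AaABAg"]["policy"])  # -> regulate
```

Wrapping `json.loads` in a helper like this also gives a single place to catch `json.JSONDecodeError` when the model returns malformed output.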