# Raw LLM Responses

Inspect the exact model output for any coded comment.
Random samples:
- "I would rather be ruled by AI robot than by an oligarch who cares more about m…" (ytc_UgzJDUmBM…)
- "@xXjezterXx That's not how modern AI works. Old/classic AI used decision trees, …" (ytr_Ugxg_sXxd…)
- "Sorry but try script what you are using is not convincing though u are explainin…" (ytc_Ugz-93Y1S…)
- "I just phoned my bank and spoke with 'AIDA' - (Hail Hydra) It was the first time…" (ytc_UgzrtNkMu…)
- "That's why I never supported this fucking high tech and ai and I don't know.....…" (ytc_Ugw0Ilv2q…)
- "When you understand the psychological programing of institutionalized racism, th…" (ytc_UgzwMzCIv…)
- "I would never use self driving software for my car. I do like to drive but there…" (ytc_UgwRmpHth…)
- "At 20 minutes, the way that AI is so alien to us because it doesn't have a human…" (ytc_UgwIqHqzQ…)
**Comment**

> That's kinda the issue. The assumption is that consciousness or being able to display high precision in its given tasks are necessary before Terminator scenario, but what AI safety research is actually concerned about are hallucinations and manipulation - things that AI are already showing signs of, and the scenario isn't "angry AI" or "robot war" either. Check out Robert Miles AI safety or Rational Animations for why this is a thing some pretty smart people are saying, and why it's a field of research at all, and don't just focus on all the stupid CEO money people who're saying this because they want regulations as anti-competitive market strategies. If it's basically really, really good at carrying out a task and setting its own sub-goals without understanding anything of what it's doing or if it's even conscious of itself, that's very bad.

Source: youtube · "AI Moral Status" · 2025-10-30T19:1… · ♥ 3
**Coding Result**
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
**Raw LLM Response**

```json
[{"id":"ytr_UgwNnocC73XTt-CBXqZ4AaABAg.AOuvt8qpcsMAOwc5FiqaaY","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgyVGfRkyqRF4Y_sOqV4AaABAg.AOuvmS1kShPAOuwS0P3mos","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgyjfAyNFS3U3EpfanF4AaABAg.AOuvhcN_aL5AOuyddQugjE","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgxZfrPepDhI-nffwUh4AaABAg.AOuvYTd4fpOAOuxaKjrLFS","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgycVBgeP0Y5ki-MYDp4AaABAg.AOuvI0SXrKmAOuzhr857ei","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgygTTORqN_u9t-6ASd4AaABAg.AOuv1gJ1pd7AOuxHCRQ-eS","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgyXde3EvAxPhtQc8gh4AaABAg.AUnASDG29jcAVDwgzbppJH","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugx6RA1SKARVqpeZAaN4AaABAg.AQum78joCfgAR9W1akHNrR","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_Ugy89sKCfpInLZuDaNx4AaABAg.APU2F7BtBJxAR9_eCdWg41","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgwD_8ldPliLhbZWB5h4AaABAg.AL4da-n0XM_ALYhALeFDg2","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}]
```
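A response like the one above is machine-readable, so the inspector can validate each coded row before displaying it. Below is a minimal sketch of such a parser, assuming the four dimensions shown in the Coding Result table; the allowed value sets are inferred only from the values visible in this sample and the real code book may permit others:

```python
import json

# Dimension vocabularies inferred from the sample response above.
# Assumption: the actual coding scheme may allow additional values.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # a row without a comment ID cannot be stored
        # Every dimension must be present and hold a known value.
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

# Example: one valid row, one row with an out-of-vocabulary value.
sample = (
    '[{"id":"ytr_a","responsibility":"company","reasoning":"virtue",'
    '"policy":"liability","emotion":"outrage"},'
    '{"id":"ytr_b","responsibility":"bogus","reasoning":"virtue",'
    '"policy":"ban","emotion":"fear"}]'
)
kept = parse_llm_response(sample)
print(len(kept))  # only the first row survives validation
```

Filtering rather than raising keeps a single malformed row from discarding an entire batch; the dropped IDs could be logged and re-queued for coding instead.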