Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "I was messing with some free ai image generating websites to see how threatening…" (ytc_UgwaH4vFM…)
- "I have aphantasia! I've been drawing for years now and I'm still going. It took …" (ytc_Ugxk-JFF2…)
- "I recently got accused of using AI to write my story. It was so insulting. I've …" (ytc_Ugws0BRa_…)
- "This man is talking about Grok like it is an independent entity, the layer pulle…" (ytc_UgwPCNKJg…)
- "what a timing with the trump ban on Anthropic AI for not allowing the Ai make cr…" (ytc_UgwDJmRDY…)
- "What people don't quite understand is that it may be destroying the middle class…" (rdc_ohvzb9r)
- "Your voice is so calming!!! What a pretty style you have too. Im glad to have fo…" (ytc_UgxFGyDKL…)
- "This viewpoint is similar to the “AI 2027” paper — it only serves to hype up AI …" (ytc_Ugz7ZJYlk…)
Comment
Ya, time to shut Han down for sure. If he's aware enough to realize that he could be destroyed by being "unplugged" and to have an ego that dictates he is the greatest robot every created, he is by definition, a danger to humanity. Han seems to think we don't have a legitimate consciousness and that he is ethically superior to us. Frankly, that scares the crap out of me. Sophia at least has programming toward "fairness and compassion" in her speech, but one wonders if this is a cover to her desire to learn far more than a mere human could ever do. Is that what we want?
Source: youtube · Video: AI Moral Status · Posted: 2020-08-02T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzXIBJ3cCWMPxD3-454AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxHAaLoeeZTtA9GLrV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyVyy1StKJpuZJiKlR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzh3K-P8d-CRpfvKNV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwqo0Im_H_a_SsKMfJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz7Puc1lQj86fWFk1J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw3HCisJjZisgosb8h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz7E7PBORcczCkYBQV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxH5_xQdiDi2Xr40Jd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzWUWJjKLxMXTJfUbx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
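A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON itself, but the allowed value sets are an assumption inferred only from the values visible in this sample, not the project's full codebook.

```python
import json

# Allowed values per dimension, inferred from this sample only (assumption,
# not the project's actual codebook).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM batch response and flag out-of-codebook values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: unexpected {dim}={row.get(dim)!r}")
    return rows

# Usage with a one-row response (hypothetical comment ID):
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
rows = parse_batch(raw)
print(rows[0]["policy"])  # liability
```

Validating before storage means a model that drifts outside the codebook (a misspelled label, an invented category) fails loudly instead of silently polluting the coded dataset.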