Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples

- "What a change? An easy informative listening experience from the commentator!? B…" (ytc_UgxefyqPc…)
- "Short answer:Yes. This is just the start of AI,it will get a lot bigger in very …" (ytc_Ugyb0nLey…)
- "I feel like people tend to forget that there is this term called "Hobby artists"…" (ytc_UgwlHvubB…)
- "how will older poor get UBI that they cant live on but AI gurus become trilliona…" (ytc_UgwOiiCVH…)
- "i literally JUST watched a video about how to use glaze and nightahde as you wer…" (ytc_UgxtG1tAA…)
- ""Members of Congress, today we are addressing concerns of Elon Musk. Apparently …" (ytc_Ugz6OYT0O…)
- "It wouldn't even be a problem if it was a Pro-AI ad but these are anti-human ads…" (ytc_Ugz58h8Ur…)
- "The moment the scientist realized that she is copying the other robot, he got pe…" (ytr_Ugyv2MLOf…)
Comment

> I'm surprised that there was no mention of Isaac Asimov who invented the three rules of Robotics which are fundamental ethical guidelines for robots in his fiction: (1) Don't harm humans or allow harm through inaction; (2) Obey human orders, unless they conflict with Law 1; (3) Protect yourself, unless it conflicts with Law 1 or 2; plus a later Zeroth Law: Don't harm Humanity. Is it possible that something similar can be imposed on AI AGIs?

youtube · AI Governance · 2025-12-23T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzaNVAyY0y-DJD6n7V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyTRYHlxOAX69X5xCx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy9vuvVoeJfNPW6dCN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwhiyj6dEolU9bUaix4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwg8JX0OF3QY5D3TQl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxiHUwxJR7OoTXW2NR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz17aHoopcgoxSE_Sh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzsVMLmwaMXcOJRqbJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzVRWFxrX53EdkF5KZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzoaI5hHkB34Cafyc14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
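A raw response like the one above is a JSON array, one object per coded comment, keyed by comment ID with the four coding dimensions (responsibility, reasoning, policy, emotion). Below is a minimal sketch of how such a batch could be parsed and indexed for lookup by ID; `index_by_id` and `EXPECTED_KEYS` are illustrative names, not part of the tool, and the two-row payload is just the first two rows of the response shown above.

```python
import json

# Abbreviated batch response in the same shape as the raw LLM response above.
raw = """
[
  {"id": "ytc_UgzaNVAyY0y-DJD6n7V4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyTRYHlxOAX69X5xCx4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
"""

# Fields every row must carry, per the coding-result table above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and index the rows by comment ID,
    skipping any row missing an expected field."""
    rows = json.loads(response_text)
    return {
        row["id"]: {k: row[k] for k in EXPECTED_KEYS if k != "id"}
        for row in rows
        if EXPECTED_KEYS <= row.keys()
    }

codes = index_by_id(raw)
print(codes["ytc_UgyTRYHlxOAX69X5xCx4AaABAg"]["reasoning"])  # deontological
```

Indexing by ID is what makes the "look up by comment ID" view cheap: one parse per batch, then constant-time retrieval of any comment's codes.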