Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up any coded comment by its ID, or inspect one of the random samples below (a lookup sketch follows the sample list).
- "Don't worry AI can't even draw hands properly. Art made with details by humans i…" (ytc_UgxrC5G8l…)
- "Ain't gonna lie, I do trust a robotaxi over an actual human driver. Great video …" (ytc_UgwOaUp5i…)
- "I try may AI and no one could do proper image to text "A satisfied guy sits in t…" (ytc_UgywVhEme…)
- "Ya know, ai messes the hands up a lot. That's the only way I know the difference…" (ytr_UgzR7guO8…)
- "I think the issue is not chatgpt the problem is the individual who needs to see …" (ytc_UgyRQvb2t…)
- "It's difficult to believe comments from the AI promoters, because they have a fi…" (ytc_Ugxku6qVd…)
- "Literally isn't. Is the kid's fault for not seeking help and relying on a robot…" (ytr_Ugx2et2zT…)
- "If you understand the technology behind it, these exercises are just silly. It's…" (ytc_UgwpLX7hN…)
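As a minimal sketch of the lookup step, the snippet below assumes the coded results are stored as a JSON list of objects with an `id` field, mirroring the raw LLM response shown further down; the file name `coded_comments.json` is hypothetical.

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.json") -> dict | None:
    """Return the coded record for one comment ID, or None if absent.

    Assumes (hypothetically) that results are stored as a JSON list of
    objects carrying an "id" field, like the raw response on this page.
    """
    with open(path) as f:
        records = json.load(f)
    # Return the first record whose ID matches, or None if none does.
    return next((r for r in records if r.get("id") == comment_id), None)

print(lookup_comment("ytc_UgzzWTjFyNqNa2IBi_p4AaABAg"))
```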
Comment
Isaac Asimov's laws are as follows:
“(1) a robot may not injure a human being or, through inaction, allow a human being to come to harm;
(2) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law;
(3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
Asimov later added another rule, known as the fourth or zeroth law, that superseded the others. It stated that “a robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
YouTube · AI Governance · 2023-05-17T13:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
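Each of the four dimensions takes a value from a closed code set. As a hedged sketch of how a result could be validated, the label sets below are inferred only from the values visible on this page, so the real codebook may contain additional labels.

```python
# Allowed labels per dimension, inferred from values visible on this page;
# the actual codebook may define more labels than are shown here.
CODE_SETS = {
    "responsibility": {"developer", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    for dim, allowed in CODE_SETS.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems
```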
Raw LLM Response
```json
[
{"id":"ytc_UgwCrzuFWe4X6EgFrvl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxkmT-ZX4uSc0ynOpx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzzWTjFyNqNa2IBi_p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxALK5X_nx-57F-gJR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxx__-xGAVbIxxOwix4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyxSWW5bM3wicCxhzt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzeAcm8MZkPZ7cO2_V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxYsase4mVHMOfqJuF4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxPahevxAdRxcHK92x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx2X4Fnx9jpQYcKOrh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
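One way to consume a batch response like this is sketched below. It assumes the model reliably returns a well-formed JSON array, as it did here; a real pipeline should also handle malformed output.

```python
import json

def parse_batch(raw: str) -> dict[str, dict]:
    """Map each comment ID in a raw batch response to its coded dimensions.

    Assumes the model returned a well-formed JSON array of objects, as in
    the response above; production code should catch json.JSONDecodeError
    and re-prompt or skip the batch instead of crashing.
    """
    records = json.loads(raw)
    return {r.pop("id"): r for r in records}

# Example using one record taken verbatim from the response above.
raw = '[{"id":"ytc_UgzzWTjFyNqNa2IBi_p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}]'
print(parse_batch(raw)["ytc_UgzzWTjFyNqNa2IBi_p4AaABAg"])
# {'responsibility': 'developer', 'reasoning': 'deontological', 'policy': 'regulate', 'emotion': 'approval'}
```

Keying the mapping by comment ID makes it cheap to join the model's codes back onto the original comments, which is how a coding result like the one displayed above can be retrieved for any single comment.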