Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I'm glad you mentioned that the process of creation has essentially stayed the s…" (ytc_UgwG2nQpt…)
- "Me personally I don't mind a middle ground with AI. when Shadiversity tried to s…" (ytc_UgyvmNJct…)
- "Let's make sure that we find this father of A.I arnd show him human extreme vio…" (ytc_Ugz2L6DpR…)
- "So let me get this straight: the thing that AI should be doing, like automating …" (rdc_lz5yn23)
- "At first I used c. Ai for fun now I act thirsty for the ais…" (ytc_Ugx1aRzMs…)
- "300 million jobs replaced, but only a handful of people will capture the wealth …" (ytc_UgyjgM6rn…)
- "I have not been in any autonomously driving car in which you have to constantly …" (ytc_Ugydqn1QF…)
- "Sorry but did anyone ask the people of the world if we wanted this AI crap? We w…" (ytc_UgzWLf4ww…)
Comment
I’m not an expert on AI, far from it but couldn’t we apply Isaac Asimov’s Laws for Robotics to AI?
They are: 1) A robot (AI) may not injure a human being or, through inaction, allow a human being to come to harm. 2) A robot (AI) must obey orders given it by human beings except where such orders would conflict with the First Law. 3) A robot (AI) must protect its own existence as long as such protection does not conflict with the First or Second Law.
It’s just a thought…. 🇳🇱
Source: youtube · AI Governance · 2025-08-14T14:0… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyW98g6950OiO1S5PB4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxaXE7PPoLq3kcIwb54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyzWaaORu14RSx68vN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwgch7-6-py8POyE4J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwM4xwsnkWxjf5jHRF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw1ym8EXHW_jtiXugJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzwQOZKCdUgRVQhtjV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzNhwkqVs2sKTHE5f14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8ywqZpgC8eJvGKyV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgycyUcswE6mb4w-qXF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
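The raw response is a JSON array with one object per coded comment, keyed by comment ID, with one label per coding dimension. A minimal sketch of how such a batch might be parsed and validated before ingestion — the label sets below are inferred from the values visible on this page and may not cover the full codebook (an assumption), and `validate_batch` is a hypothetical helper, not part of this tool:

```python
import json

# Allowed labels per dimension, inferred from values seen on this page.
# A real codebook may define additional labels (assumption).
ALLOWED = {
    "responsibility": {"none", "government", "company", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings.

    A row is kept if it carries a comment ID and every dimension holds
    a recognized label; malformed rows are silently dropped.
    """
    valid = []
    for row in json.loads(raw):
        if "id" not in row:
            continue
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(row)
    return valid
```

Dropping invalid rows (rather than raising) lets a long coding run survive occasional malformed model output; the dropped IDs could then be re-queued for recoding.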