Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgiXdeg1k…: Perfect example of "uncanny valley". This robot is trying so hard to act like hu…
- ytc_UgwPuF3LF…: its not made for good . that's why they gave it to us. it will be our fault and …
- ytc_UgzSOAtL7…: How nice of these AI grifters to provide you with free content for your videos.…
- ytc_UgxI7ZpCs…: ChatGPT is crap, for sure. It doesn't compete in storylines. It's very shallow. …
- ytc_UgyLMAM_S…: Look, I don’t even like say this, but does aroma de look up look like humans but…
- ytc_UgwzhW5Xq…: Stop using Chat GPT...we are literally losing our common sense because we're not…
- ytc_Ugyh0HDxu…: One question I would like to know is they say that we need all these data center…
- ytc_Ugxwu9IRO…: Labour jobs will be difficult to replace with robots 🤖. For example paving a new…
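To reproduce the ID-based lookup described at the top of this page outside the tool, a minimal sketch is shown below. It assumes, hypothetically, that the coded comments have been exported to a local `coded_comments.json` file holding a JSON array of objects shaped like the raw LLM response shown further down; the file name and export step are not part of the tool itself.

```python
import json
from typing import Optional


def load_codes(path: str = "coded_comments.json") -> list[dict]:
    """Load the exported array of coded comments (one dict per comment).

    The path is a hypothetical export location, not something the tool produces.
    """
    with open(path, encoding="utf-8") as f:
        return json.load(f)


def lookup_by_id(codes: list[dict], comment_id: str) -> Optional[dict]:
    """Return the coded record whose 'id' matches the given comment ID, if any."""
    return next((c for c in codes if c["id"] == comment_id), None)


if __name__ == "__main__":
    codes = load_codes()
    # ID taken from the raw response shown further down this page.
    print(lookup_by_id(codes, "ytc_Ugzn0917BPJrCOy6Je14AaABAg"))
```

A linear scan is enough at this scale; a larger export could first be indexed into a dict keyed by `id`.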
Comment
What about Isaac Asimov's 3 Laws of Robotics? Asimov was a science fiction author who wrote many stories based on the 3 laws and a variety of changes to them.
Might it be possible to implement them in an AI system, so it may not harm life (humanity and creatures beyond that definition)?
The 3 laws, for reference:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
youtube · AI Governance · 2025-09-04T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
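The table above maps directly onto a small record type. The sketch below is one possible representation, not the tool's own schema; the value sets are only those observed in this sample, and the full codebook may define more. The `comment_id` in the example is taken from the matching entry in the raw response below, on the assumption that it corresponds to the comment shown here.

```python
from dataclasses import dataclass
from datetime import datetime

# Value sets observed in this sample; the full codebook may define more.
RESPONSIBILITY = {"developer", "company", "user", "government", "none"}
REASONING = {"deontological", "consequentialist", "virtue"}
POLICY = {"regulate", "liability", "ban", "industry_self", "none"}
EMOTION = {"approval", "outrage", "resignation", "fear", "indifference", "mixed"}


@dataclass
class CodingResult:
    """One coded comment, mirroring the Dimension/Value table above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> None:
        """Raise if a dimension falls outside the value sets seen in this sample."""
        for name, value, allowed in [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected {name}: {value!r}")


# The row from the table above, expressed as a record.
example = CodingResult(
    comment_id="ytc_Ugzn0917BPJrCOy6Je14AaABAg",
    responsibility="developer",
    reasoning="deontological",
    policy="liability",
    emotion="approval",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
example.validate()
```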
Raw LLM Response
[
{"id":"ytc_UgxcmMxowWMKfhjSbjl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugywj-elWMub6wDAXvh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzn0917BPJrCOy6Je14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugz0NyPdsw7xyoN8mEJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgxM2leIvNBDE5Xpzpt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwulza7Tr0OVRFRfMN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgygTSyN9dM6MTs5aCR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugwsr_jFyNuieyV734l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxhcs-kqP1djLnH6qF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzYdggTRxdrrEMnsE14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
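The displayed Coding Result is simply the entry from this batch whose `id` matches the selected comment. Below is a minimal parsing sketch under that assumption; entries missing an expected field are dropped rather than guessed, and the inline sample reuses two entries copied from the response above so the snippet runs on its own.

```python
import json

REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw batch response (a JSON array, as above) and keep only
    entries that carry every expected field."""
    entries = json.loads(raw)
    if not isinstance(entries, list):
        raise ValueError("expected a JSON array of coded comments")
    return [e for e in entries if REQUIRED_FIELDS <= e.keys()]


def pick(entries: list[dict], comment_id: str) -> dict:
    """Return the single entry whose id matches the selected comment."""
    return next(e for e in entries if e["id"] == comment_id)


if __name__ == "__main__":
    # Two entries copied from the raw response above, for a self-contained demo.
    raw = (
        '[{"id":"ytc_Ugzn0917BPJrCOy6Je14AaABAg","responsibility":"developer",'
        '"reasoning":"deontological","policy":"liability","emotion":"approval"},'
        '{"id":"ytc_UgxM2leIvNBDE5Xpzpt4AaABAg","responsibility":"none",'
        '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
    )
    entries = parse_raw_response(raw)
    row = pick(entries, "ytc_Ugzn0917BPJrCOy6Je14AaABAg")
    print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
```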