Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- rdc_ohxgm0w: "Did anyone watch the AI SpongeBob streams like 2 to 3 years ago? Some of the fun…"
- ytc_Ugx-ttyN1…: "Those using AI must be registered and obtain a licence failure to do so should b…"
- ytr_UgyRPMuCO…: "I personally know people using AI to solve problems and scale businesses. I've s…"
- ytc_UgyD-f85E…: "Gents, I enjoy you show but really think having an authentically divergent viewp…"
- ytr_UgyRmO609…: "You are restricting the bounds of discoverable facts when you train AI - there a…"
- ytc_Ugxc-6U8e…: "no because ai art isn’t are and it’s not a “time saver” anything worth doing isn…"
- ytc_Ugx39E6qe…: "One of the most important questions is if ai is better than human in everything,…"
- ytc_UgwAAed33…: "Solving complex problems trains our mind in the same way as we train our body du…"
Comment
Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Source: youtube · AI Governance · 2024-03-20T15:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxBaHk_zS6K6gEX1it4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwNbaPbt0BOymcT8zl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy8CxU6cl0eLjq31iF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzSNPS6ioVd0S3o2o94AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzPhWlmKbwkuapO5Vx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuhxTp72yOF_GVPjF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxBNuD8zHs53FzA3UJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxWWG54GJmWymlTysF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzijq8vSkVr7MC4X1x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw-WqjhVx_esUGdRSR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
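The raw response above is a JSON array with one record per comment and a fixed set of coding dimensions. A minimal sketch of how such a response could be parsed and validated back into per-comment records, assuming the allowed category values inferred from the sample output (the actual codebook may define more categories):

```python
import json

# Allowed values per coding dimension. These are ASSUMED from the values
# seen in the sample response, not taken from the project's codebook.
SCHEMA = {
    "responsibility": {"government", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed"},
}


def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index records by comment ID.

    Raises ValueError when a record is missing its ID or uses a value
    outside the known schema, so malformed model output is caught early.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if not cid:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        # Keep only the known dimensions, keyed by comment ID for lookup.
        coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded
```

With the response indexed this way, the "look up by comment ID" view maps directly onto a dictionary access, e.g. `parse_coding_response(raw)["ytc_UgxBaHk_zS6K6gEX1it4AaABAg"]["policy"]` yields `"none"` for the first record above.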