Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I think people forget that the main problem is not that AI is a "tool", it liter…" (ytc_UgyCGU27o…)
- "To all musicians and singers, keep making music and don't let AI-generated music…" (ytc_UgxDjxIhQ…)
- "My writing style is so bad, it’s detectable through ai. My fanfic in middle scho…" (ytc_UgwU_7WpF…)
- "The answer is more simple than you'd actually think: a conscious AI will be awar…" (ytc_Ugw_q2qIv…)
- "AI is Sauron's Ring. "One Ring to rule them all, One Ring to find them, One Rin…" (ytc_UgxXsvJYq…)
- "AGI is 5 years away now? In the 1960s it was only a year away so now we really n…" (rdc_kvfhc8l)
- "By the time we make a strong AI, we will understand everything that makes us tic…" (ytc_UghnMNcWH…)
- "I'm not following how the robot crushed him into the conveyor belt. That box loo…" (ytc_UgxyHlRQ9…)
Comment

> What amazes me when talking about humanoid robots and Ai in general is how people always seem to focus on “what they can do for you” or “in your replacement” and never “what they can refrain you from doing”. The same robot than can cook your family’s favourite meal in your kitchen can easily be hunting you and your family down for saying something “wrong online” that opposes it or human/oids that are in power. And don’t even get me started on the “wanted dead or alive” settings. Which will be obviously backed by law. You know, when humans will agree that another human loosing their life at the hand of technology is an “okay” thing for whatever reasons listed below…

youtube · AI Governance · 2026-03-25T09:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxUZt2fZ9Rd1FIe_nV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgybUwWRHNDwv_lIouB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyceVaFeecBUcubkJp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyiPIrKl0NwoLjPBTB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxCwSALBOK-O7k3Gwp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzboDfMadk67yFexrt4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzNemDzL_6wT-wt6fp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwAVlDLoONpTo8arY54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxugldY5MFypj2pIuB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwKR6WeK1JguFdibxt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
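A batch response like the one above can be parsed and indexed by comment ID before it is stored. The sketch below is a minimal example, not the tool's actual pipeline: the field names come from the JSON shown, but the allowed-value sets are an assumption inferred only from the labels visible in this sample (the real codebook may define more categories), and the `parse_batch` helper is hypothetical.

```python
import json

# Two rows copied verbatim from the batch response above.
raw = '''[
  {"id":"ytc_UgxUZt2fZ9Rd1FIe_nV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwAVlDLoONpTo8arY54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]'''

# ASSUMPTION: vocabularies inferred from labels seen in this one response;
# the actual codebook may include categories not shown here.
ALLOWED = {
    "responsibility": {"none", "company", "government", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"fear", "approval", "outrage", "resignation", "mixed", "indifference"},
}

def parse_batch(text):
    """Parse one raw LLM batch response into a dict keyed by comment ID,
    rejecting rows with missing fields or out-of-vocabulary labels."""
    coded = {}
    for row in json.loads(text):
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row missing id: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

coded = parse_batch(raw)
# Look up by comment ID, mirroring the inspector's lookup feature.
print(coded["ytc_UgwAVlDLoONpTo8arY54AaABAg"]["policy"])  # liability
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: each coded record is one dictionary access rather than a scan of every batch response.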