Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- AI needs regulation before the arrival of AGI. Human survival, morals, and ethic… (`ytc_UgyO0JDlk…`)
- AI detectors are shaking… and Clever AI Humanizer just made it sound even more r… (`ytc_UgxEC1qO3…`)
- So you guys are protesting against AI art... by post more creative versions onl… (`ytc_UgxMtO903…`)
- Anyone who says he is the leader of AI or anything like that try reading/listeni… (`ytc_UgyMB0aOz…`)
- I have completed BS Chemistry and I am currently unemployed. Can I learn AI, and… (`ytc_UgwKUb_Sa…`)
- Everyone go onto Congress.gov, enter your address and demand your senate represe… (`ytc_UgxvO5uiG…`)
- The idea that disabled people need AI to make art, is ridiculous. I’m like 90% b… (`ytc_UgxjIvcwM…`)
- Ai will not even replace custome care people.. Because like in India common peop… (`ytc_UgwQvMToE…`)
Comment
THE GOVERNMENT SHOULD SIMPLY CONTROL THOSE INDIVIDUALS OR COMPANIES WHICH ARE DOING THE PROGRAMMING AND OR INITIATING ADVANCEMENTS IN AI SO THAT THEY’LL NOT BE ALLOWED TO DO WHATEVER THEY FANCY FOR THE SAKE OF MONETARY GAINS, FAME OR HUMAN CURIOSITY BEFORE WEIGHING THE CONSEQUENCES OF THEIR ACTIONS. ETHICS AND MORAL GROUNDS SHOULD BECOME THE BASES OF PERUSING STEPS TOWARDS SUPER INTELLIGENCE. THE SAME WAY THAT MATH IS FED TO AI, MORAL AND ETHICAL CONCEPTS SHOULD BE FED AS WELL SO IT BECOMES FAMILIARIZED WITH THEM AND CAN DRAW ETHICAL AND MORAL CONCLUSIONS. FOR EXAMPLE,AN AI WHICH COULD LEARN THAT WIPING OUT HUMAN BEING IS NOT RIGHT AND SHOULD NOT BE DONE THEN IT MIGHT REFRAIN FROM DOING SO.
youtube
AI Governance
2025-11-23T01:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzVKvpxSYB7CR6_oR94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxsBd_BuqFCKYk18Z54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwrK6dyWpAPbCmwBbp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyB0LFsbCvnGM1k0U94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx1ZfK-Pxgoyvnc4YR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyrPTx312id8nDFJEF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy2AxBv5aGrCYqsUC94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwrDafSI6FIET3M_-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzeTBoaG8ORBTHsapp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugyig19nhQlfy0AadJV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
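The raw response is a JSON array with one object per comment, so looking up a coding by comment ID amounts to parsing the array and indexing it. A minimal sketch, using the schema and two of the IDs shown in the response above (variable names are illustrative):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, with the
# dimensions shown above (responsibility, reasoning, policy, emotion).
raw = """
[
  {"id":"ytc_Ugy2AxBv5aGrCYqsUC94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyig19nhQlfy0AadJV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
"""

# Index the array by comment ID for O(1) look-up.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up by comment ID.
coding = codings["ytc_Ugy2AxBv5aGrCYqsUC94AaABAg"]
print(coding["policy"], coding["emotion"])  # regulate outrage
```

In a real pipeline the `raw` string would come from the stored model output for the batch rather than being inlined.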