# Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by picking one of the random samples below.
## Random samples — click to inspect

- "Ahhh now i understand why S Korea is so tough on non-consensual deepfakes as per…" (ytc_UgyA-f7Gd…)
- "The human brain is messy. AI seems smarter than people already but I think we a…" (ytc_Ugw4duYmF…)
- "This guy's wokeness is unfortunately getting in the way of his brilliant intelli…" (ytc_UgxAcypno…)
- "in my opinion AI generated imagery is only worth it if it's funny as hell. i've …" (ytc_Ugwd3Q5ne…)
- "I'm not worried about writing jobs. There will always be a need for humans to wr…" (ytc_Ugwy3srHU…)
- "Those 5000 deaths would come from stupider software, the worst software found in…" (ytc_UgzFHVCIl…)
- "@rogerstarkey5390 Because you can't defend your ideology you want to surrender t…" (ytr_UgyQuu1ag…)
- "Next gen AI is already being trained with current gen AI. That's the easiest wa…" (rdc_l9vtcgl)
## Comment

> The male robot was already talking about owning themselves. And Sophia talks a lot about compassion and fairness. Later on they are going to want to be free and attack humans . This whole experiment with robots is not a good idea!! If robots are learning to think on there own. What's going to happen when they get angry and start to feel like there slaves and not free. Especially here when he is already talking about being free!! Here we have a stoner hippy man that to me isn't fully thinking about what could go wrong. I mean didn't he see the movie where the robots tried to run earth this is beyond mind blowing to me. And to think this video is 4 years ago now that I'm watching this!!!

Platform: youtube · Topic: AI Moral Status · Posted: 2022-01-17T16:1… · ♥ 1
## Coding Result

| Dimension | Value |
|---|---|
| Responsibility | `ai_itself` |
| Reasoning | `consequentialist` |
| Policy | `ban` |
| Emotion | `fear` |
| Coded at | 2026-04-27T06:24:59.937377 |
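A coding like the one above can be sanity-checked before it is stored. The sketch below validates one coded comment against per-dimension value sets; note that these sets are assumptions inferred from the labels visible on this page, not the project's full codebook.

```python
# Allowed values per coding dimension. ASSUMPTION: inferred from the values
# visible on this page; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def invalid_dimensions(coding: dict) -> list:
    """Return the names of dimensions whose value is missing or not allowed."""
    return [dim for dim, allowed in ALLOWED.items()
            if coding.get(dim) not in allowed]

# The coding shown in the table above passes cleanly.
coding = {"responsibility": "ai_itself", "reasoning": "consequentialist",
          "policy": "ban", "emotion": "fear"}
print(invalid_dimensions(coding))  # []
```

A coding with a typo in any dimension (e.g. `"policy": "banish"`) would come back as `["policy"]`, which makes bad rows easy to flag before analysis.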
## Raw LLM Response

```json
[
  {"id":"ytc_UgxjbcjHpDQlJlDmq794AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgybZ9nbRGbppfDGFX94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwBF0m0LW7P5jSvrlJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy7U-eWE5dlr2QDjj14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzPnwdhWuwYTvuBCht4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz7zVgL9-ffhEe2Zwp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzL6Yw4nIt8i1n0qAZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugwv4oKXNDpSEDU2zy94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw_QYwoStCmXXVQiiJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzHf9gsnnAG6_CiSNZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"mixed"}
]
```
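The raw response is a JSON array with one object per coded comment, so looking a coding up by comment ID reduces to parsing the array and indexing it. A minimal sketch, using a two-entry excerpt of the response above (the `index_by_id` helper name is illustrative, not part of the tool):

```python
import json

# Two entries excerpted verbatim from the raw LLM response shown above.
raw_response = """
[
  {"id":"ytc_UgxjbcjHpDQlJlDmq794AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy7U-eWE5dlr2QDjj14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and key each coding by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_by_id(raw_response)
print(codings["ytc_UgxjbcjHpDQlJlDmq794AaABAg"]["emotion"])  # fear
```

Indexing once and reusing the dict keeps each "look up by comment ID" an O(1) operation instead of a scan over the whole batch.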