Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I hate ai art its not art also being born with a gift for art, still, pure tale…" (ytc_UgzioVi7h…)
- "Whatever anybody says, with artificial intelligence it will always be, you bring…" (ytc_Ugy1J4d_G…)
- "No more jobs just people using ChatGPT? No more money just people using tokens? …" (ytc_UgxGZbJ0i…)
- "Sounds like South Korea govt is a big messed up. Imagine if all teachers and sol…" (ytc_UgwH2x9DT…)
- "My opinion is that I think ai art is fun to mess around with when bored, or mayb…" (ytc_UgznFMkeE…)
- "Interesting view, lol We are no nearer AGI than we were 20 years ago though, AGI…" (ytr_UgyW_vlv-…)
- "both the process and ethics of training LLM AI(the basis of all image, text, mus…" (ytc_UgzW4p0R7…)
- "I have created a feature on a platform that I created, where the platform gives …" (ytc_Ugzu_R1xT…)
Comment
> Bull on your clickbait headline. Any accidental AI data breaches so far have been human errors. This was a test and it doesn't prove anything. Probably best to do more analysis with an independant company at your own expense. So bottom line you're telling us China is investing billions in AI alignment and we should too. Hmm! The National Institute of Standards and Technology (NIST) has the AI Risk Management Framework (AI RMF). The DOJ and Homeland security and many other Government Agencies also provide oversight. I really question what your true intent is. I can see your grooming us with fear mongering. Have fun with that, the left has already lied and murdered millions with the jab. You have completely lost the trust of the American People. Sorry!

youtube · AI Moral Status · 2025-08-23T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzpJM16cXJj7RdyXZF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz-ibpMyHjVvQQO3q94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyYMElQFK6FE36adpF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgyJUJBKo0y7BTUDMSp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz5RyHpWlofN_hPC5F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgztGHFa-aWRqMLBHox4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxbm1908YYWU7gAFAp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwKA_eEdDGVVN0c9fZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw8IKq9MY13R9kK3hF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyWeeS7pNNRc7Jvq6N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
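The raw response above is a plain JSON array, one record per comment, with the same four coding dimensions shown in the result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the "look up by comment ID" step, assuming only that shape (the function name `index_by_comment_id` is illustrative, and the sample record is copied from the response above):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# This sample record is taken verbatim from the response shown above.
raw_response = '''[
  {"id": "ytc_Ugxbm1908YYWU7gAFAp4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "industry_self",
   "emotion": "indifference"}
]'''

def index_by_comment_id(raw: str) -> dict:
    """Parse the model output and index coding records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
coding = codings["ytc_Ugxbm1908YYWU7gAFAp4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → user indifference
```

Indexing by ID first makes each lookup O(1), which matters when cross-referencing many coded comments against a batch response like the one above.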