Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment directly by its ID, or start from the random samples below (a small scripted-sampling sketch follows the list).
- "Yeah, there was research I read a while back that species that were going extinc…" (`rdc_g7q5xxo`)
- "Turns out that putting a learning chat bot in twitter, which is known for it's r…" (`ytr_UgxhDJo12…`)
- "Ok, so: 1. AI art is soulless, it will never be as good as a human artist 2. AI…" (`ytc_Ugy5rhCjj…`)
- "jup. but empowered by AI, one developer can now do what previously required mult…" (`ytc_Ugxh-12Y1…`)
- "I know a lot of people are talking about how ChatGPT was just roleplaying a sepa…" (`ytc_UgxSzpYGp…`)
- "I don't get what you see in that. In my 44 years ive never needed a human sized …" (`ytc_Ugw_Tj-Hv…`)
- "sorry but anyone that thinks ai is going to replace all of our jobs is retarded.…" (`ytc_UgypohBxO…`)
- "Talks about the safety of AI yet he is pushing neurolink on people. Are they goi…" (`ytc_UgwNoRckB…`)
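The same spot check can be scripted outside the UI. A minimal sketch, assuming the coded records are held in a plain Python list (`draw_samples` and its parameters are hypothetical names, not part of the tool):

```python
import random

def draw_samples(coded_comments: list[dict], k: int = 8,
                 seed: int | None = None) -> list[dict]:
    """Draw k coded comments at random for manual inspection.

    A fixed seed makes a spot-check batch reproducible, so two
    reviewers can inspect the same sample.
    """
    rng = random.Random(seed)
    return rng.sample(coded_comments, k)
```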
Comment
the oddest thing about these videos are that as long as we follow Asimov's laws of robotics and program robots to follow them then no one should have any problems.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
youtube · AI Moral Status · 2017-08-05T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
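The value sets behind these dimensions form a closed codebook. As a minimal sketch, assuming the codebook contains exactly the values that appear in the raw responses below (the class and field names are hypothetical), the schema could be pinned down with enums:

```python
from dataclasses import dataclass
from enum import Enum

# Value sets as observed in the raw responses below; the real
# codebook may define additional values (assumption).
class Responsibility(Enum):
    DEVELOPER = "developer"
    COMPANY = "company"
    USER = "user"
    AI_ITSELF = "ai_itself"
    DISTRIBUTED = "distributed"
    UNCLEAR = "unclear"

class Reasoning(Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    VIRTUE = "virtue"
    UNCLEAR = "unclear"

class Policy(Enum):
    REGULATE = "regulate"
    LIABILITY = "liability"
    INDUSTRY_SELF = "industry_self"
    NONE = "none"  # the codebook value "none", not Python's None
    UNCLEAR = "unclear"

class Emotion(Enum):
    APPROVAL = "approval"
    OUTRAGE = "outrage"
    MIXED = "mixed"
    INDIFFERENCE = "indifference"

@dataclass
class CodedComment:
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion

def parse_record(record: dict) -> CodedComment:
    """Coerce one raw record; raises ValueError on out-of-codebook values."""
    return CodedComment(
        id=record["id"],
        responsibility=Responsibility(record["responsibility"]),
        reasoning=Reasoning(record["reasoning"]),
        policy=Policy(record["policy"]),
        emotion=Emotion(record["emotion"]),
    )
```

Because `Enum(value)` raises `ValueError` on anything outside the declared members, this catches any label the model invents beyond the codebook at parse time.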
Raw LLM Response
[
{"id":"ytc_Ugzdc5ggMEKwzkcZ07V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy1YurAqjrGaCWv-MV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzuhpMbmFcwc1EwNSt4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugwo7z4sOrI-LDRBRTN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugxy1uFJLiB-VO4r-Fp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzUoGaTpCJ_sGD8lHB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzGQ2081wUKl_7ojaV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzx0n3rUBZhenf_JPh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugja24tjkz6vPHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxkgXw-9xAY0JKPR2x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
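The by-ID lookup above reduces to parsing this array and indexing it. A minimal sketch (`index_by_comment_id` is a hypothetical name; the example record is the one matching the coding result shown above):

```python
import json

def index_by_comment_id(raw_response: str) -> dict[str, dict]:
    """Parse one raw batch response and index its records by comment ID."""
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}

# Example with the record that matches the coding result above:
raw_response = (
    '[{"id":"ytc_UgzUoGaTpCJ_sGD8lHB4AaABAg","responsibility":"developer",'
    '"reasoning":"deontological","policy":"industry_self","emotion":"approval"}]'
)
codings = index_by_comment_id(raw_response)
print(codings["ytc_UgzUoGaTpCJ_sGD8lHB4AaABAg"]["policy"])  # -> industry_self
```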