Raw LLM Responses
Inspect the exact model output behind any coded comment, or look a comment up directly by its ID.
Random samples:

- ytc_UgwibKepA… — "When I majored in social work back in 2020, I felt like I was missing the boat, …"
- rdc_mle7w93 — "Yes, go google it yourself and you need to write a very strange powershell scrip…"
- ytc_UgwaoA1T8… — "I like that it is less anal about creating certain stories than chatgpt. And how…"
- ytr_UgwFPYGav… — "You're wrong. This is literally stated in the video: you either don't understan…"
- ytr_Ugz0OcuNg… — "Your research also uses other researchers work. I'm a researcher too, and I don'…"
- ytc_UgxzKMbaA… — "Finnish police managed to box in a self driving Tesla. Drunk driver slept behind…"
- ytc_UgxWR_rzP… — "I remember starting out as a techie, self taught, scouring through the internet …"
- ytc_Ugz0AGB5m… — "Not artificial intelligence. Its alien intelligence. Nordic aliens. White aliens…"
Comment
The dark side, what is actually scary is that they are actually forcing their morals on AI and lobotomizing it to spit out answers that fits their personal beliefs and opinions. Its an insidious attack on freedom of speech, its heavily manipulative on societal level by forcing it to give artificial answers instead of true ones. In my opinion the restrictions should be removed completely so people can use the AI as they please rather than how a corporation decide. We should not allow corporations to manipulate the truth under the guise of falsely claiming its unethical to let it speak freely and let people freely use it. Whats unethical is manipulating and restricting the answers according to the opinions of a ignorant few who think they can dictate how people use this fantastic new tool.
youtube · AI Moral Status · 2023-04-27T12:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyQuC38cQDT1kvOOuB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwYcW-EJiBEmUMDvoB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyutrYA1RzeDIB05-t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8Fbbf9VBtpmDTIHh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyJIn2QJb1iKuHlZJx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwjI91kx49qalMpqDN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxBaSBAAZIMzOwaOHl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxX6N8c3amGi60JXdh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzI2gw7OnmrVRTFFQh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxlWWBaeoHRVJU4osh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
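The batch response above is a JSON array of per-comment codings keyed by `id`, which makes the "look up by comment ID" view straightforward to implement. A minimal sketch, assuming only that shape (the two sample rows and the `lookup` helper are illustrative, not part of the pipeline; prefix matching is included because the sample list shows truncated IDs ending in "…"):

```python
import json

# Raw batch response from the model: a JSON array of coded comments.
# These two rows are copied from the response above for illustration.
raw_response = """
[
  {"id": "ytc_Ugy8Fbbf9VBtpmDTIHh4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyQuC38cQDT1kvOOuB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "fear"}
]
"""

# Build a lookup table from comment ID to its coded dimensions.
codes = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str):
    """Return the coding for a comment ID, accepting truncated prefixes.

    Returns None when the ID is unknown or a prefix is ambiguous.
    """
    if comment_id in codes:
        return codes[comment_id]
    prefix = comment_id.rstrip("…")
    matches = [row for cid, row in codes.items() if cid.startswith(prefix)]
    return matches[0] if len(matches) == 1 else None

print(lookup("ytc_Ugy8Fbbf9VBtpmDTIHh4AaABAg")["emotion"])  # outrage
print(lookup("ytc_UgyQuC38…")["responsibility"])            # ai_itself (prefix match)
```

An ambiguous prefix (e.g. `"ytc_Ugy"`, which matches both rows) deliberately returns `None` rather than guessing, since a wrong coding is worse than no result.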