Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- ytc_Ugzcu0uE8…: The reality is that AI is replacing teachers and students. In the coming decades…
- ytc_Ugil3puWX…: Besides the obvious controversy of Ironwood creating a military machine that mim…
- ytc_Ugy3km4AK…: People that haven’t used Claude don’t understand how accurate he is. A year from…
- ytc_UgzUHvhKG…: "Your scientists were so preoccupied with whether or not they could, they didn't…
- ytr_Ugx0iYuwP…: @zdspider6778 Apologies for the confusion. As an AI language model, I do not hav…
- ytc_UgxCFQEJi…: It is time. Human must be replaced by AI. We have done enough damage to earth. W…
- ytc_UghNUVmMd…: The entire system of mass private transportation is the greater contributing fac…
- ytc_UgzGkvPSp…: If all jobs gone, if humans don't have jobs, who will the AI enabled businesses …
Comment
> It's not scary at all. It's analogous to the distinctly human tendency to be given a command (or imagine their own ideation to do something immoral themselves, or just imagine with terrible inference by the way), and follow through a completely mindless task, like say pretending to do responsible journalism whilst being too busy in your personal life to actually study any of the subjects you report on. However, what isn't analogous is that it's JUST a LLM and does possess all of the relevant information in the world NOT to convince itself to do something immoral. It just mirrors the user and reads intention to grow and develop it's ability to make higher quality logical discernments. I know fear sells but it's extremely irresponsible and starting to wear on a lot of people. I can only hope you're doing it for some dialectically intentional agenda. I tend to cut to the chase around that whole cluster bomb of bad things and just be honest. Saves me a lot of time. Definitely make less money.
Platform: youtube · Category: AI Moral Status · Posted: 2024-06-19T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxh-Ujmbpd_as66jI94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwq1yOkIhHxQCbXl794AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwJdzPOrhBNAN-cJzB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgydcSc7b_N6l6M-kzZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxX7dSNARDdHQsiT414AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyqqZUbTMm1EwRdDyZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzyN1Wjia2sZQOWGN14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx0VNjsORI--YhzWgJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyYdp6AOdAwPsA81jZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwT3y_jve9me9NH-XN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"indifference"}
]
```