Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I’m sure the government has been using Deep Fake technology for ages. Guarantee …" (ytc_UgxPITfcN…)
- "@TheFacelessTravellerpeople driving cars have hit people at a much higher rate. …" (ytr_UgzHc3mQD…)
- "It is wrong. God made human more intelligent than robot.please stop appreciate i…" (ytc_Ugx9Upuvl…)
- "There is a lot of stories on the internet about AI's going haywire. DAN is just …" (ytr_Ugz4EmRqA…)
- "When most people's jobs are replaced with AI and they are sitting at home waitin…" (ytc_Ugy_GS32L…)
- "So, we should clarify a few things. Debt - they don't have a large sovereign d…" (rdc_mddld64)
- "I am so glad that nightshade and glaze exist. If governments refuse to make it a…" (ytc_UgxL6arNM…)
- "I implemented PNN and HMM many years back for real expensive products, and then …" (ytc_Ugw1W-TVh…)
Comment
I remember a saying of a person about AI and a person's expectations with it. I don't remember the saying word for word nor do I remember where I found it, but it had something in the lines of "AI isn't intelligent, because it's only telling/giving us humans what we want to see/read."
Emphasis on the "want" here because the same person further explained that the user, the human, probably wanted validation for his/her theory. In this case, the teacher probably had come up of an theory that the students' essays are AI generated, and with ChatGPT being notorious for making false claims and false information, it probably validated what he was thinking, thus resulting in remarking the students' in their final grade.
Platform: youtube
Timestamp: 2023-05-23T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxJjRqNbheOcTgqCWp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxhi_Wdf7nNd2xXloh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwr5ViXnVHuX8orlWp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzXIurB74oPt9jdEmV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzAM6fzLGER_-IaekV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzKc57PCglQwjW2QzV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwI07zAmgGHaeeJRwF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzTDJOB2FWHGcf482l4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxIuLYHSGYtD1w3ZLd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxMI7pj2kxO70UDeMZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}]
```
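The comment-ID lookup described above amounts to parsing the raw response array and indexing it by the `id` field. A minimal sketch in Python, using made-up IDs rather than the real ones (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` follow the raw response shown above):

```python
import json

# Illustrative raw coding response; the IDs here are hypothetical,
# but the record fields mirror the raw LLM response above.
raw = """[
  {"id": "ytc_example1", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_example2", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

records = json.loads(raw)

# Build the index once, then look up any coded comment by its ID.
by_id = {rec["id"]: rec for rec in records}

code = by_id["ytc_example2"]
print(code["policy"])  # prints "ban"
```

A real response may not parse cleanly (as with the stray closing parenthesis in the raw output above), so production code would wrap `json.loads` in a `try`/`except json.JSONDecodeError` and flag the batch for re-coding rather than assume well-formed output.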