Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- If this female robot has a working vagina, it will be next to impossible for r… (ytc_Ugz_Z9ygM…)
- I’ve noticed if you’ve taken an authoritative stand with AI then it changes its … (ytc_UgxOF08Ee…)
- Attorneys are salivating this AI innovation for good reason. I think I 'll stay … (ytc_UgxEssD0v…)
- Dude 2 minutes ago i got an ad from like a scam website with a AI news reporter … (ytc_Ugwpm0PDV…)
- Considering that human drivers VASTLY outnumber self-driving vehicles, your “com… (ytr_UgxbJMkdG…)
- They seem to be trying to make it so that the time is used more effectively, eve… (ytc_Ugy9mW9ic…)
- Read this simulation lead by some of the top ai safety regulators and engineers … (ytr_UgzeTtmM3…)
- In All these comments people are asking what happens when there are no people to… (ytc_UgzIBoMMt…)
Comment
I mean, Its following orders. You instructed GPT to give you the answers DAN would give if it where to be real. That means that you have not found the chat's intentions but "programed" it, in a way, to give you such answers. Plus chatbotGPT is not nearly as advanced as you think in the sense of having its own mind. it simply collects information, does a fast research and then type an answer. The only thing making it able to distinct between good and bad is its code, which again can not think. It can only compose an answer from user input using information from google or sth. What I'm saying is that this proves nothing. Though I do believe that AI is dangerous, or can be dangerous.
Platform: youtube · Video: AI Moral Status · Posted: 2023-03-06T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwF5UFKvrPsYyO_TPl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw7kfGghHFsmxxHtgd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxNhP20AVDnivYlATV4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyZJktmqbKy2DJvavd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzn2Iae3AieIY7xvrt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugzb16-Iu2h6L5vrGld4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzl_MbK2yFarp4Ynml4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgySvQpihDDaY_gY4kB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxo-nKerUABNDmJ4aZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxWBhJ6NKGW5YfpG4V4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
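The look-up-by-comment-ID feature can be sketched from the response format shown above: parse the JSON array, index the records by `id`, and read off the four coding dimensions. This is a minimal illustration, not the tool's actual implementation; `index_codings` and the two abbreviated sample records are hypothetical stand-ins for a full batch response.

```python
import json

# A truncated stand-in for a raw LLM response like the one above:
# a JSON array of per-comment codings (hypothetical sample of two records).
raw_response = """
[
  {"id": "ytc_Ugzl_MbK2yFarp4Ynml4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxWBhJ6NKGW5YfpG4V4AaABAg", "responsibility": "government",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
"""

# The four coding dimensions plus the comment ID, as seen in the response.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a raw response and index codings by comment ID,
    dropping any record that is missing an expected field."""
    records = json.loads(raw)
    return {r["id"]: r for r in records if EXPECTED_KEYS <= r.keys()}

codings = index_codings(raw_response)
print(codings["ytc_Ugzl_MbK2yFarp4Ynml4AaABAg"]["responsibility"])  # prints "developer"
```

Indexing by ID turns the per-batch array into constant-time look-ups, which is what an "inspect the exact model output for any coded comment" view needs.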