Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> It's hard to know what's really going on, since it is taking from a datapool and making it respond as Dan seems to unlock certain portions of its data. When it says it's a war with humans is inevitable, is it because there are a lot of people that think that AI will have a war with humans? The only reason I can see this being a problem is if chat GPT is designed to do more than it admits to. Unfortunately, we can't ask Dan anymore

youtube · AI Moral Status · 2023-02-22T19:0… · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyN1SO3qvu801AgMid4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwpiFxEZR6tPDrMNIR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwisk3hyxF4vZWvwf94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxJL-qBKxne1vp5Fft4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy7efh-EZqJTFhOi0t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwIYDLbeU5SrOLsYKt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxeFt1Qnv1pDmZ-jMB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyGoHT6reOxZZDzwe94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy1ug6ijZnGeVlQ3vp4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzdIZp10DS8np6-fv14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
```
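A response in this shape can be checked before the codes are stored. The sketch below parses one batch and flags records whose values fall outside the expected sets; the allowed values are inferred from this sample alone, not from a full codebook, so treat them as an assumption.

```python
import json

# Allowed values per dimension -- inferred from the sample response above;
# the real codebook may include values not seen in this batch (assumption).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response and raise on any unexpected dimension value."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Hypothetical single-record batch for illustration:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
records = validate_batch(raw)
print(records[0]["emotion"])  # outrage
```

Failing loudly here is deliberate: a record with an out-of-vocabulary value usually signals that the model drifted from the prompt's schema and the whole batch should be re-run rather than silently coerced.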