Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_Ugx-LcGZs…` — What is going to be valued? Keeping the Ten Commandments. Why? Because if you di…
- `ytc_Ugx96luPB…` — Those in trades are in danger because those that hire them are ones who have job…
- `rdc_l5kktle` — "OpenAI faces more turmoil as another employee announces she quit over safety co…
- `ytc_UgxxFIyTx…` — There are probably alot more cases that have used chatGPT, but the point is to u…
- `ytc_UgxJ-89tT…` — YALL SOME OF YOU ARE THINKING THE CREATORS CAN SEE YOUR CHATS. THEY CANNOT. ITS …
- `ytr_UgxnduCsu…` — I would disagree that it's impossible for a LLM to be conscious, as we haven't y…
- `ytc_Ugy-OO3is…` — And we can see that AI is not the problem. The problem is the people and instit…
- `ytc_UgzNRS6q2…` — Dunno, depends if he actually puts in work or if he's just queueing up the slot …
Comment

> I want to see an interview where they ask the AI robot if they ever lie to humans in their responses to our questions.

youtube · AI Moral Status · 2024-04-19T06:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwcusJV54vditB8MoJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyPB-jLQbsppFt6yuN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwa5rcmiCKlWYytzJ14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzPR76kB6Gy1x3_nVF4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxSec0Kpi4zol0krEV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzZQrVs4XwJApCGYNJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwXJTlrQtfNS4zePxN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxUBeFTf7IZBuMWcMx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyseRG3qrtONhqkeCd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyBz5P0yFQ_8AHx1zB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```