Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Wasn’t it able to do this 13 years ago. I mean it was not chatgpt but it definit…" (ytc_Ugz6lYRJ7…)
- "OMG they are shocked that Brexit has 2 sides. They wanted to stop EU citizens …" (rdc_fwgocr9)
- "You do realize how offensive that comparison is, right? Comparing a group of ma…" (ytr_UgwmaBYnq…)
- "96% of jobs cannot be done properly with the so called AI 😅😅😅😂 what a fucking te…" (ytc_UgxDsRCHu…)
- "Isn't some of the mass surveillance already illegal? (Not that that stops the de…" (rdc_g4m7g7h)
- "AI doesn't need defending. Think it's weird how people defend it like it somehow…" (ytc_UgwuRKXBe…)
- "AI can only do what we create it to do and/or teach it to do. If we create our …" (ytc_UgwrxTmiG…)
- "There is still jobs that ai wont replace soon but you have to understand how qui…" (ytc_UgzrVd7bQ…)
Comment
Large language models answer questions by predicting the next line in the answer based on the data set they were trained on. When you query LLMs with direct questions about "if you were conscious would you do x", they are autocompleting the answers based on the data sets that include many articles about AI and discussions about movies concerning AI. Of course it gives you the answer you are probing for. I'd be much more concerned if it didn't.
youtube · AI Moral Status · 2025-06-09T17:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugy3x9TVSQG8l0b-2Fx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyE7Wt2gmYHSpYjUu54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz9Vtt0cfqSzmoO9pB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwzhTnzdbJs1qPAqI14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxZizyFLT3gy12pDdZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgybGJsLPA4ihli5dpV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwnMk3H-8BLA6fQo1t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwi40ymv3IsB3rcsrF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxR-Z2j4RdDLezOMAJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxbY188UD_USRYxIWB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"}
]
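A minimal sketch of how a raw response like the one above could be parsed into per-comment codes. The allowed value sets per dimension are inferred from the coding table and the JSON shown here, so the exact vocabularies are assumptions, as are the function and variable names:

```python
import json

# Allowed values per dimension, inferred from the examples above
# (the real coding scheme may include more categories).
SCHEMA = {
    "responsibility": {"none", "ai_itself", "user", "company"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "approval", "fear", "outrage"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment ID, rejecting out-of-schema values."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Hypothetical single-record batch for illustration.
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"indifference"}]')
print(parse_batch(raw)["ytc_x"]["emotion"])  # indifference
```

Validating against a fixed schema at parse time catches the common failure mode of LLM-based coding, where the model invents a label outside the codebook.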