Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples:

- "School is mandatory that’s why teachers have a job, having a coding job is not m…" (ytc_Ugwkc3p5Q…)
- "I like it as a tool for human use, but fear it as a tool of human misuse. Elon M…" (ytc_UgwUei_wn…)
- "These autonomous trucks will not adapt to climate change and increased weather a…" (ytc_UgyuUTLUo…)
- "What is we make robot with right who look's like human. And robot like in factor…" (ytr_UgzvdM7bK…)
- "this is why i treat ai as a way to know what to research, rather than searching …" (ytc_Ugy_NgzSM…)
- "I wonder what Karen meant when she said Sam Altman became very “persuasive” in c…" (ytc_UgxsnsZAI…)
- "Its not made to take down something like this its made to take down the kind of …" (ytr_UgwXeRp4B…)
- "lol was just having a long as debate against someone that was okay wiht Ai writi…" (ytc_UgzHEcKvC…)
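The "look up by comment ID" feature above can be sketched in a few lines. A minimal example, assuming the coded comments are held in memory as a list of dicts keyed by `id`; the IDs and records below are hypothetical, and prefix matching is used so a truncated ID like those in the sample list (minus the ellipsis) still resolves:

```python
# Hypothetical in-memory store of coded comments (IDs are made up).
records = [
    {"id": "ytc_AAA111", "emotion": "fear"},
    {"id": "ytc_BBB222", "emotion": "approval"},
]

def lookup(comment_id_prefix: str, records: list[dict]) -> list[dict]:
    """Return every record whose ID starts with the given prefix,
    so a truncated display ID can still be matched."""
    return [r for r in records if r["id"].startswith(comment_id_prefix)]

print(lookup("ytc_AAA", records))  # matches the first record only
```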
Comment
I would argue that it doesn't 'know' that it's lying in the sense that a conscious being knows things. Rather, it 'knows' things in the sense that a calculator 'knows' that 1+1=2. If a calculator was programmed to deliberately give wrong answers, then you could arguably say that the calculator 'knows' what the correct answer is but it is 'lying' by deliberately giving the wrong answer because it was programmed to. In the same way that ChatGPT is programmed to apologize. So the calculator would be 'lying' in that sense, but still not be conscious.
Source: youtube · Video: AI Moral Status · Posted: 2025-01-24T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxa9Brcg1gtQpDYx114AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzhUtVNaCUhT6meh6R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy-34E1CZP0gCqZYFB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwcKcB-HkRvvDQqzaV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwln7PqL9aHe4eALgN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyZlLkD7kACYddBO3N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw5nVqGQ9LTynj8oNx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy2MP4fCMpzLMga1oB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw4jAI5XoiC-5jr0nh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw1yMo6AMaJ6nricCl4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
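The raw response is a JSON array of per-comment codes, one object per comment, with the same four dimensions shown in the coding-result table. A minimal sketch of parsing and validating such a response; the allowed value sets below are illustrative assumptions inferred from the values that appear on this page, not the full codebook:

```python
import json

# Allowed values per dimension, inferred from the codes visible on this
# page; the real codebook may define more values.
ALLOWED = {
    "responsibility": {"none", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none"},
    "emotion": {"outrage", "indifference", "fear", "mixed", "approval"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose every
    dimension holds an in-vocabulary value."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

raw = '[{"id":"ytc_X","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}]'
print(parse_codes(raw))  # the single row passes validation
```

Filtering rather than raising keeps one malformed row from discarding a whole batch of codes; rejected rows could equally be logged for re-coding.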