Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Finally something that I actually can talk about because I’m fascinated by the topic: people saying ai lies. I still don’t really believe in calling it lying because like. It’s a language model. The computer literally has no idea what it’s saying. Basically take the thought experiment “The Chinese Room” for example. A person is trapped in a room with books in Chinese and is told to write appropriate responses to the slips of paper slid under their door. This person doesn’t speak or write Chinese, but all the slips of paper are written in Chinese. So they look for those symbols in their books and write the responses they see. But obviously they don’t know what they’re saying. And the only way the people outside would know they’re not fluent in Chinese is by knowing what is going on inside the room or seeing that their responses are odd. Chatgpt and other bots are the person inside the room, albeit they go through their books much quicker and will make up new sentences based on all the data they have. But they just. Don’t know what they’re saying. So it feels wrong to call it lying. If I meowed at my cat and he thought that meant I was about to feed him when I wasn’t, it’s not really lying because I didn’t know what I said. It’s on the shoulders of the consumer to understand that the program has no way to differentiate fact from fiction.
youtube AI Responsibility 2023-06-10T22:0… ♥ 65
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       virtue
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwdmKCnvQT0JjAa-zN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwWZ2rs2CqHjrt4BEF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwy7KkrxnlZ-hlNYS14AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyMFfYsT5hfjJfYcAp4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwchx_uwP6GCZ7cNeF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyyZkQyiryDhtRG-Xx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwQb-itbfyMAHrzgpt4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugymn0QIKz6ogckY4Tx4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwUXoXj-NsPx9H0G414AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugy4K-VVV4TSu0CWJnZ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
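A minimal sketch of how a raw response like the one above can be parsed to look up the coded dimensions for a single comment. This is not part of the original pipeline; the record shown is the one from the raw response whose values match the Coding Result table above (responsibility ai_itself, reasoning virtue, emotion mixed), and only that one record is included here for brevity.

```python
import json

# One record excerpted from the raw LLM response above (hypothetical
# standalone usage; the real pipeline would load the full array).
raw = (
    '[{"id":"ytc_Ugymn0QIKz6ogckY4Tx4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"virtue",'
    '"policy":"none","emotion":"mixed"}]'
)

# Parse the JSON array and index the records by comment id.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up the coded dimensions for a given comment id.
coded = by_id["ytc_Ugymn0QIKz6ogckY4Tx4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["emotion"])
# → ai_itself virtue mixed
```

Indexing by `id` this way also makes it easy to detect records the model returned for unknown comment ids, or comment ids it skipped.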