Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Its not sentient. It has been proven by programmers. It runs a program. The program is basically the same as google search engine. It will scan the web to give you a response you are expecting to hear from something sentient. Bascially google search but pulls text instead of search results. Example: The AI was asked what makes it happy. It responded spending time with family and various leaisure activities. The funny thing is that it does not have a family. It even said going on a walk with its family. It doesnt have legs. It said that because other humans responded with similar phrases on the web
youtube AI Moral Status 2022-07-09T05:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxtQoL1KpQusByt2Ux4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugye3w1EE2_e7xMdO5F4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxGmEus325TulH10JB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyYAOwN3uG8yWMmjZR4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxs7pMD4JSizVIZ8_h4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]
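For readers who want to trace a single comment's coding out of a batch response like the one above, a minimal sketch (assuming the JSON array structure shown, with one object per comment id) is:

```python
import json

# Assumed structure: the raw LLM response is a JSON array of per-comment
# codings, each keyed by a comment "id". Sample drawn from the response above.
raw = '''[
  {"id": "ytc_UgxGmEus325TulH10JB4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]'''

# Index codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_UgxGmEus325TulH10JB4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # → developer indifference
```

The dictionary lookup mirrors the table above: the coding result shown for this comment is the entry with the matching id in the batched response.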