Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I've heard the Chinese Room thought experiment used not to prove that Turing-Test-passing AI's are not necessarily strong, but that strong AI is impossible. Which is absurd. The biggest problem with the proper Chinese Room argument is simple. If you define "understanding" in such a way that the Chinese Room does not qualify...how is it that you could tell understanding from false understanding?
youtube 2016-08-09T23:3… ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgjlGx8FR-EgZHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugi-CMHZ6z1IiHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugi7CjBupUbtHngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UggnfU6yPgq2B3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UggU9g4favmQ-3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgihvWXlqNA6T3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgjV46XtY-kr1ngCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UggFxLep9Z31AXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UggloAGB5WNOMngCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UghR_DYsydJIdHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
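A minimal sketch of how a raw batch response like the one above can be parsed and matched back to an individual comment. The coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response itself; the variable names are illustrative, not part of any pipeline shown here.

```python
import json

# Abbreviated copy of the raw model output above (first entry only,
# shown here as an inline string for illustration).
raw_response = '''[
  {"id": "ytc_UgjlGx8FR-EgZHgCoAEC", "responsibility": "none",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"}
]'''

records = json.loads(raw_response)

# Index the coded records by comment id so a single comment's
# coding result can be looked up directly.
codes_by_id = {record["id"]: record for record in records}

result = codes_by_id["ytc_UgjlGx8FR-EgZHgCoAEC"]
print(result["emotion"])  # → indifference
print(result["responsibility"])  # → none
```

Looking the codes up by `id` is what lets a report page like this one display the exact model output alongside each coded comment.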