Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Please see Doug Lenat's work. He argues that the Touring test is much easier to fool than we thought, and it has made AI focus on trivial things like fooling a human. He already created Strong AI through Cyc by teaching the computer millions of axioms, and giving it a way to store and infer information like humans.
youtube 2016-08-10T05:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UghlVHdKSsFDl3gCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UghOLJXJkinIxXgCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ughp0m-7OLTnKngCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Uggkc-b_dQ7sPXgCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgjUboft16pmnXgCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UggXYUtVSt6pTXgCoAEC", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UghCEDSQhCbKyHgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgiRZubvHnok63gCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugh_eqMzofsL5ngCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ught2widn_LlsngCoAEC", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}
]
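The raw response above is a JSON array, one object per coded comment. As a minimal sketch of how such a response could be consumed downstream (the variable names and the two sample entries here are illustrative, not part of the pipeline):

```python
import json

# Two entries copied from the raw LLM response above, held as a JSON string
# the way the pipeline would receive the model output.
raw = '''[
  {"id": "ytc_UghlVHdKSsFDl3gCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UghOLJXJkinIxXgCoAEC", "responsibility": "none", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"}
]'''

# Parse the array and index each coding by its comment id for lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the coded dimensions for one comment.
print(codes["ytc_UghOLJXJkinIxXgCoAEC"]["emotion"])  # approval
```

Indexing by `id` makes it straightforward to join the model's codes back onto the original comments.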