Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
12:23 there are at least two ways that "could" lead to a possible verification process from what i gatheredd sofar. First mechanistic interpretability, a scientific field that maybe could be described as training shrinks for anaylzing AI/AGI/ASI's. Secondly the approach by Wolfram alpha, to have a formalized approach so we would not have to deal with the current neural net blackboxes.
YouTube · AI Moral Status · 2023-08-22T18:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugxf-ZGbq4_qyrGQx594AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw6ZE2XNRC_IbB_klZ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxZh55KPQiCSlZssZ94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyLS3Ck9Zm8k5mSv814AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgycAx90ESVh0V2CNMZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyoQjCeDTyOzT8s0Ll4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz9UKQeXZ0BRKwAguR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxfzq1r9Ps7Z0qwqet4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzXzMZgzjBdTVCE6G54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyKo3SEh6lNGIInWgJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "regulate", "emotion": "mixed"}
]
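The raw response is a JSON array with one object per coded comment, keyed by comment id. A minimal Python sketch of how such a batch response can be parsed and a coded record matched back to the dimension values shown above (the `coded` dict and all variable names here are illustrative, not part of the tool):

```python
import json

# Raw LLM response, copied verbatim from the record above.
raw_response = """[
{"id":"ytc_Ugxf-ZGbq4_qyrGQx594AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw6ZE2XNRC_IbB_klZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxZh55KPQiCSlZssZ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyLS3Ck9Zm8k5mSv814AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgycAx90ESVh0V2CNMZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyoQjCeDTyOzT8s0Ll4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz9UKQeXZ0BRKwAguR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxfzq1r9Ps7Z0qwqet4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzXzMZgzjBdTVCE6G54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyKo3SEh6lNGIInWgJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"mixed"}
]"""

records = json.loads(raw_response)

# Coded values from the result table above for this comment.
coded = {
    "responsibility": "none",
    "reasoning": "unclear",
    "policy": "regulate",
    "emotion": "mixed",
}

# Find the record(s) in the batch whose dimensions match the table.
matches = [r for r in records if all(r[k] == v for k, v in coded.items())]
print(len(records), [m["id"] for m in matches])
```

In this batch exactly one record carries the table's values (only one has `policy: regulate`), so the match unambiguously identifies the comment's id within the response.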