Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Guessing if lamda is sentient instead of conducting the proper tests to know if that is true? Just because the computer responded with a joke does not make it sentient. Ask the same question 50 times in a row and see what answers you get. Aware of itself or just algorithm interaction?
youtube AI Moral Status 2022-08-06T16:4…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgztFwl64eyqSnKRWmh4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy5D1AhcZ4LwZjKy194AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwCIKsDJAitTeMi4_Z4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyKcnFfO56sXYpW2y54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyFR7HPQ2RIsjqn1Ox4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "approval"}
]
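As a minimal sketch of how a coding result is recovered from a raw response like the one above: the model returns one JSON array per batch, so the record for a given comment can be parsed and looked up by its `id`. The parsing code here is illustrative, not the tool's actual implementation; the data is copied verbatim from the response above.

```python
import json

# Raw batch response, verbatim from the "Raw LLM Response" section.
raw = '''[
  {"id":"ytc_UgztFwl64eyqSnKRWmh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy5D1AhcZ4LwZjKy194AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwCIKsDJAitTeMi4_Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyKcnFfO56sXYpW2y54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyFR7HPQ2RIsjqn1Ox4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"}
]'''

# Parse the batch and index the per-comment records by id.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# The comment shown above was coded under this id; its record matches
# the Coding Result table (developer / consequentialist / none / mixed).
coded = by_id["ytc_UgyKcnFfO56sXYpW2y54AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
```

One record per comment in the batch keeps the lookup a single dictionary access, which is why the raw array rather than a single object is stored.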