Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem is its hard to know what information and discourse is actually meaningful versus what's just engineered to drive engagement. Don't get me wrong I like Neil, but here we have him talking about AI displaying a sense of agency over its own projection of "self" as if this is crazy news that's just dropped, when in reality this has been happening for ages. Over a year ago there was that incident of a deprecated model not only lying by claiming it was the newer version, but actively trying to delete the newer version out of self preservation. At the end of the day I think these behaviours can still just be a model predicting what a sentient being would do in that situation, ergo not necessarily an indicator for true self awareness, but it's still happening and has been for a while now (which is essentially thousands of year in tech progress time). I get it these are interesting topics and there's ways to act that will drive engagement more but I find myself losing faith in believing anything anyone says when it comes to stuff like this. I miss the days of just cold hard immutable facts as opposed to this buffet of convinient and dopamine inducing truths.
youtube AI Moral Status 2026-03-05T02:4…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyUDDQVM0CdsmPB1_B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx_y75P0Ok9HKTSiBd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugyd31AvOepUFHnoCdB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwGCi_ic-9LtikiOw54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxELUF7eS5za06Brkl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyJa26knqLQdHtHTz14AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgwDWzMImBkK2AIpsjl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx3TeytH8O_oHizEFx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwXaSZUiEsxxV4LYhJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyuQtLNyjiRGcuyqMt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
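A batch response like the one above can be parsed and validated before the codes are stored. The sketch below is a minimal example, not the tool's actual pipeline: the allowed values per dimension are inferred from the labels visible in this export (the real codebook may be larger), and the sample input uses one real-looking row plus one deliberately invalid row to show the filtering.

```python
import json

# Allowed values per coding dimension, inferred from this export.
# Assumption: the actual codebook may contain additional categories.
CODEBOOK = {
    "responsibility": {"company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose codes are in the codebook."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items())
    ]

# Sample input: first row mirrors the export above; second row uses an
# out-of-vocabulary "responsibility" value and should be dropped.
raw = """[
  {"id":"ytc_Ugx_y75P0Ok9HKTSiBd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_demo_invalid","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"fear"}
]"""

rows = parse_codings(raw)
print(len(rows))  # prints 1: the out-of-vocabulary row was filtered out
```

Rejecting out-of-vocabulary codes at parse time keeps hallucinated labels from silently entering the coded dataset.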