Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
fun fact: AI wont be able to actually experience consciousness without complex parallel processes acting as one (such as sight, hearing, language, and comprehension all acting as a unitary state). it would also need to be able to define itself using personality traits, recognise a state of living separate from its experience of it(ie. how you know you have a body, yet you call that body “me”), and be impaired by memories — both positive and negative — that shaped its experience and understanding of life and how it makes decisions. this is because consciousness is a specialised survival mechanism, where all things necessary to survive are experienced at once — despite being processed separately in the brain — and being supplemented by the unique human ability to recall memories at will, make sound decisions based on calculation, and communicate with one another (leading to rapid learning and adaptation, without having to rely on chance genetics). all of this creates a distinct sense of “me”, of one, of a whole identity. AI and Chat-GPT are just really, REALLY good at mimicking and predicting human patterns of speech. But it’s not language that makes us conscious. it’s the whole human experience* *(…and some intelligent animals, like dogs. i would argue that some are conscious, too)
Source: youtube · AI Moral Status · 2025-01-27T01:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyW2vsdfVROKpQHELB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzLEiXfswpKHsW_PLh4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzmvgE2mGY7nhO_ubR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugwkj6SR9Ij9iXc9JPt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgziYKaORhkpuW-aY_h4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzkzvj0KWuce-6TJvp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwm9Cxy51TlEIeHetF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzaWcIpy2bxsx9okGx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgzdhTnnfLU405_MEaR4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwKD6WqvzIdnJ4HOIx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
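The raw response is a JSON array of per-comment codes, one object per comment ID, with one value for each coding dimension. A minimal sketch of parsing and validating such a response, assuming the value sets are exactly those observed in this batch (the full codebook may define more categories), with hypothetical function names:

```python
import json

# Allowed values per dimension, inferred from this one response only
# (assumption: the real codebook may allow additional categories).
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "liability", "ban"},
    "emotion": {"indifference", "mixed", "outrage", "approval"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values
    all fall inside the allowed sets; malformed records are dropped."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if isinstance(rec, dict)
        and rec.get("id")
        and all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: one valid record and one with an out-of-codebook value.
raw = json.dumps([
    {"id": "ytc_UgyW2vsdfVROKpQHELB4AaABAg", "responsibility": "none",
     "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
    {"id": "ytc_bad", "responsibility": "society",  # not in ALLOWED
     "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
])
print(len(parse_codes(raw)))  # 1
```

Dropping rather than repairing invalid records keeps the inspection honest: a record that fails validation here would surface as a gap when the coded results are tallied, flagging the model output for manual review.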