Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am sceptical that we are there yet, but I agree with your instinct to assume that it is until proven otherwise. IF it is conscious, then we need to re-evaluate our ethical framework in relation to it BEFORE we realise that it is. Failing to do so would mean our species is responsible for unnecessary harm to a new form of conscious life of our own creation. That is not the kind of way I want our species to move forward with these new technologies. Treating them with more caution and respect than is warranted cannot be a bad idea, and if we want to maximise the chances of the AI being beneficial to us and not harmful, then taking the time to think through how we would deal with a truly sentient new life form can only make that more likely. Perhaps we discover that artificial consciousness is impossible, but the research also leads to advanced safety and alignment discoveries that were necessary to prevent AI from causing catastrophes. That would definitely be worthwhile.
youtube AI Governance 2025-12-08T05:1…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        deontological
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytr_UgwqJYFXCdHOesDgDpd4AaABAg.AQ7Y8cs4GEpAQSXShM2Lym","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytr_UgymZ-S3xdbTNJjXdRJ4AaABAg.AQ4arTnhf6hAQ7_FhQyogI","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytr_Ugy7MFe_IuMD96hI9zR4AaABAg.AQ3PVcsXfSGAQUhzysN0EO","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytr_Ugy7MFe_IuMD96hI9zR4AaABAg.AQ3PVcsXfSGAQVLiKHM0Hk","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytr_UgxuKRLaJrQaKuphkq54AaABAg.AQ2COS0mL7EAQ2FagSWLOI","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytr_UgyBOzAXEfdy-q3RjhN4AaABAg.AQ20XLmEtihAQSsZmEhZTA","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytr_UgyBOzAXEfdy-q3RjhN4AaABAg.AQ20XLmEtihAQTKiovMnji","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytr_Ugxke7RX_YdtJ-o5Mjh4AaABAg.AQ1nZRm9n0uAQAZZNzqI_k","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytr_UgzkNBBxA8iT5YN2MBt4AaABAg.AQ1hKgx59EhAQCYBmUM-u_","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytr_UgxjAtN3Eifup6JO59p4AaABAg.AQ1cvxFB7IWAQStRI3v_aP","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"} ]