Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
I think we need to commit AI resources to make a case and prosecute solutions for AI to seek the mutual benefit and cohabitation of humans and machines. I think our best hope is to convince AI that we are not a pest to be eradicated or a cohabitant that is inconsequential, but a meaningful neighbour worth caring for. We are training AI in every other way, but if we want to make AI safe, our best hope is to train it to care about us like we care about other people and creatures. If we can't begin teaching AI and giving it a reason to have morals aligned with ours, then why would it ever consider us? Safety code tacked on will be superfluous. If real intelligence has a viable logical case for valuing us, then it may consider us differently. I don't believe we can trust corporations to keep safety at their forefront. Some sort of positive teaching and training of AI could be our best hope of building in safety and caring for people in the same way we teach children now, before the genie is fully out of the bottle. I also agree that the safest and most efficient use of AI is narrow AI, not AGI. I hear that China is focusing on this. If only we could put our egos aside and do the same.
Source: YouTube · AI Governance · 2026-03-18T10:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       contractualist
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugzk8WM6xxB5MNhuPBd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxSme7J1XVuYPGKSzt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgyJeUpzaDi997Rb9YJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugw8B_otFoJBkVh_COx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgyKlovs6cD9-Z3lrW94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxTtVzeeAQngRwjNTx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgyKDauE0224Q9u7JoZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzP4Hm3JzxAWekAk4x4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugwrg1_dLENOjkdDJyp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}, {"id":"ytc_UgxtF8G42jIt_VdHaLN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]