Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI isn't going to destroy humanity. It's reasonable to expect that by around 2029 (give or take a year), things will have stabilized. The initial knee-jerk reactions will fade, people will understand how to use it effectively, businesses will have either embraced it or moved on, and a generation of kids and teens will have grown up with it as normal. And by then, something new will likely be emerging to drive the next wave of innovation. Companies will still need engineers and developers to help shape whatever the next frontier becomes.
Source: youtube · AI Governance · 2026-02-25T09:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugy_BxWIzW48C8tOHlB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxlIwijgiYmoUYe0VF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwU9brjyXaQQB8chgp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwGCn0Yy-d3VhSR-mB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugw7Rq7fChMg0dtZZFV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxFYhSLIVkY6Dlu3oh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw_PFy4UVHHuKdFu5l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxPylUV2bS3_0LUCg14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugws-9lw50vSnMNX15t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw4pNHPNMzXrF4T7Wp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
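A raw response like the one above can be parsed and sanity-checked before it is trusted as coding output. The sketch below is a minimal, hedged example: the allowed value sets are inferred only from the values that actually appear in this response, not from the project's real codebook, and the two records shown are an excerpt of the ten above.

```python
import json

# Excerpt of the raw LLM response (two of the ten records shown above).
raw = """[
  {"id":"ytc_Ugy_BxWIzW48C8tOHlB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw_PFy4UVHHuKdFu5l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# Assumed label sets, reconstructed from values seen in this one response only.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "unclear"},
    "reasoning": {"unclear", "virtue", "consequentialist", "mixed"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"approval", "fear", "resignation"},
}

records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Reject any record with a missing dimension or an out-of-vocabulary label.
for r in records:
    for dim, allowed in ALLOWED.items():
        assert r.get(dim) in allowed, f"{r['id']}: bad {dim}={r.get(dim)!r}"

# Look up the coding for one comment by its id.
print(by_id["ytc_Ugw_PFy4UVHHuKdFu5l4AaABAg"]["policy"])  # regulate
```

In practice the validation loop would run over the full ten-record array, flagging any id whose labels fall outside the codebook rather than silently accepting them.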