Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Another thing that's ironic to me, is that we fear A.I. taking over. There are huge amounts of content on this premise. And as an example Hollywood isn't showing any signs of slowing this topic down (see "The creator"). While really entertaining, here's the irony: we're training A.I. on our content, so it can better emulate us and in some cases already surpass us. Aren't we teaching it that there is only one possible outcome? Exactly the one outcome we don't want? Wouldn't it be more responsible to create vastly more content where it shows a future where humanity lives in harmony with A.I.? I get it would be more difficult to make money of, but it might just save us in the near future.
youtube AI Governance 2023-05-26T05:2… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       mixed
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgwkWturiTw6NsXIsGJ4AaABAg.9qBIlcxnCpM9qvPaR8kVeC", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgwTsRy-bB49HxDpvdh4AaABAg.9q9wbD-Tt2F9q9xZE2Giy2", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytr_UgwTsRy-bB49HxDpvdh4AaABAg.9q9wbD-Tt2F9qPhbc07kTF", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytr_UgyE1MuKqINRtWT9MH54AaABAg.9q4npuUxAtk9qoM3igurqV", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytr_UgyE1MuKqINRtWT9MH54AaABAg.9q4npuUxAtk9qp5BHuwHRN", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytr_UgzFIbXaXWy3i7-yaPh4AaABAg.9q2s7oiIlZO9qjQ0EXQc7x", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgzFIbXaXWy3i7-yaPh4AaABAg.9q2s7oiIlZO9qjQCft8woS", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytr_UgzFIbXaXWy3i7-yaPh4AaABAg.9q2s7oiIlZO9qmInA-6kev", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytr_UgwBE7LBifcD-HOOLGx4AaABAg.9q1T0Y5NWTj9q1UE5dIEGI", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_Ugx8twS2rXvKhVUYCkV4AaABAg.9pxy5pqff7R9pybHjzVRs0", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"}
]
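The coding shown in the result table is recovered by matching the comment's id against the batch response above. A minimal sketch of that lookup, assuming the response is valid JSON with the field names ("id", "responsibility", "reasoning", "policy", "emotion") seen in the raw output; the variable names here are illustrative:

```python
import json

# Raw batch response from the model (abridged to the entry for this comment;
# the full response contains one object per coded comment).
raw_response = '''
[
  {"id": "ytr_UgwTsRy-bB49HxDpvdh4AaABAg.9q9wbD-Tt2F9q9xZE2Giy2",
   "responsibility": "distributed", "reasoning": "mixed",
   "policy": "unclear", "emotion": "fear"}
]
'''

# The id of the comment whose coding we want to inspect.
comment_id = "ytr_UgwTsRy-bB49HxDpvdh4AaABAg.9q9wbD-Tt2F9q9xZE2Giy2"

# Parse the response and pull out the matching entry.
codings = json.loads(raw_response)
match = next(c for c in codings if c["id"] == comment_id)

print(match["responsibility"], match["emotion"])  # distributed fear
```

The `next(...)` call raises `StopIteration` if no entry carries the requested id, which surfaces silently dropped comments in the batch rather than masking them.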