Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think we need consider who we'd rather destroy humanity. Because destroying humanity is something we'll destroy on our own at least with AI we may have an opportunity to make the transition from a biological species to a synthetic alternative that can take advantage of far more energy sources and have an indefinite timeline at our disposal but will also destroy humanity
youtube 2024-06-24T03:2…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           liability
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgzN-2ZhPXEzKtXkd_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugxz3aImsbM1VWGZB6F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxValqUaehX5WEO0TZ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"}, {"id":"ytc_Ugw6gmcbszjD-Q8ffBF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwhRVJ-f3DZ40M5cOt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgyW-JQ0vlHJB5G8NSR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugyt6XUIDj0PhO2RJTV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugzjqh-vzpqiXK_VrYl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}, {"id":"ytc_Ugy_MJI22azIOrbk5uJ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugyr4ABF0M7jH19gSvJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"ban","emotion":"outrage"} ]