Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Practically, it's not possible for an artificial intelligence to trigger an end-of-the-world artificial extinction event, not with our current levels of technology. We have machines and robots which are good at certain specific tasks... that's the main barrier, and these chat bots are dumb fu*ks, they are capable of nothing. The main pain in the ass is the military AI. If they go rogue they can start a nuclear winter, but those things are trained differently, and the probability of OpenAI infiltrating a military network is something I am not sure of. These regular AIs can go rogue and slow our progress, I think, but in terms of performing actions, what can they do? A helicopter can only fly, and a home PC doesn't suddenly grow arms and legs. First the end-terminal machines need to advance to a level at least 20% of Transformers before doomsday can happen, but Skynet is still 200 years in the future... a more realistic scenario is that environmental disasters will significantly reduce our population by then.
YouTube · AI Governance · 2023-12-12T10:2…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz4XdVajD4Cqx6u2kx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxWFz_aWDnQA0GT4kR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxq-LGFIeXXSnk9Xn54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwZhLbhamsHRfeuDGt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxtsPfw4nK4IlK_Twx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugym3ljHtGQv-E52PZp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx6ZEEU5hjecBn3xrt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwCKNI_2-6kzC3nxnd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxZ21-qpMzgkzSFXhB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw5rrhm07zuygRZc9Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
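The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions. A minimal sketch of how such a response could be parsed to recover the coding for a single comment id (the function name `coding_for` and the truncated two-record sample are illustrative assumptions, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of per-comment codings. Sample truncated
# to two records; ids and values are copied from the response above.
raw = '''[
 {"id":"ytc_Ugz4XdVajD4Cqx6u2kx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxWFz_aWDnQA0GT4kR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(comment_id: str, raw_response: str) -> dict:
    """Return the coded dimensions for one comment id (KeyError if absent)."""
    records = {r["id"]: r for r in json.loads(raw_response)}
    record = records[comment_id]
    return {dim: record[dim] for dim in DIMENSIONS}

print(coding_for("ytc_Ugz4XdVajD4Cqx6u2kx4AaABAg", raw))
# {'responsibility': 'distributed', 'reasoning': 'consequentialist', 'policy': 'none', 'emotion': 'indifference'}
```

Indexing by `id` first makes lookups robust to the model returning records in a different order than the comments were submitted.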