Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is very concerning in regard to the race for ASI and the 'singularity', as the tech industry refers to it. Following leading experts, research, and logic, it is a major concern that there is no way to safeguard this technology: it becomes impossible once it moves past narrow and general AI. The speed at which an ASI is able to evolve means it will evolve past any safeguards or measures put in place as it surpasses human intellect. This is ultimately a catastrophic path for humanity, and as many leading experts suggest, we should stop the race for Artificial Super Intelligence and instead focus on the development of narrow and general AI that can be utilised. On the moral and ethical side of this current issue, the question is how six main players (US companies) decide for 8.3 billion people. 8.3 billion people have NOT consented to this, and when/where will this discussion take place? The AI ethicist Tristan Harris is worth listening to on this issue and on the current lack of regulation of AI technologies.
youtube AI Jobs 2026-04-09T22:0…
Coding Result
Responsibility: ai_itself
Reasoning: consequentialist
Policy: regulate
Emotion: fear
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxaQQsCVyIDqSbcgVx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyArH4TjH5-50X6SMl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwwnIRSlpgkZgSX5PB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxNduw4rzMumQQwU-x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw8jyd_yvIEp-35YVl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyP8t69l8j3GAeKRyh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxTxfCluIi3YQybNNd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzO07jPqCO01qjs6k54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwAyOLTFDgnapV9ajN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyAmqBUbTjnQ0lYpoh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
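To trace a coding result back to the raw model output, the JSON array above can be parsed and indexed by comment id. The sketch below is a minimal, hypothetical example (the variable names and the two-record sample are assumptions, not part of the tool); the `id` whose values match the coding table above is `ytc_Ugw8jyd_yvIEp-35YVl4AaABAg`.

```python
import json

# Two records copied from the raw LLM response above; the field names
# mirror the coding dimensions: responsibility, reasoning, policy, emotion.
raw = '''
[
  {"id":"ytc_Ugw8jyd_yvIEp-35YVl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyP8t69l8j3GAeKRyh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
'''

# Parse the response and build an id -> record lookup table.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Look up the record whose values match the coding table in this section.
coded = by_id["ytc_Ugw8jyd_yvIEp-35YVl4AaABAg"]
print(coded["responsibility"], coded["emotion"])  # → ai_itself fear
```

Indexing by `id` makes it easy to spot-check any single coded comment against the exact model output, rather than scanning the full array by eye.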