Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I wasnt super worried about exisitential risks to humanity of the newest AGI contenders until I've seen the massive push towards Agentic AI happening across the entire tech world over this year. I really don't want LLMs or any future deep reasoning model to run or be able to interact with essential infrastructure, but i guess with the public release of MCP that train's left the station for good. Everybody and their mother wants to build agents now, may god help us when power systems become vulnerable to prompt injection. I'm still grasping onto hope that AGI (or superintelligence I guess) is a lot harder to crack than what recent progress would suggest since from what I can tell, the raw performance improvements of LLMs this year have slowed down and energy needs grow astronomically, which would not be helped at all if the AI crash does happen eventually as by my understanding modern LLMs have not had a return of investment yet. But what do I know, unlike Soares I haven't studied AI for many years, and from what little I know, he truly is an expert in the field. So I will take his concern in the matter seriously.
youtube AI Moral Status 2025-11-06T23:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
 {"id":"ytc_UgzvBR3xx6FYttlkxYt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgwQSTTf7v2pVbA_xQV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugz2BPndcHlNRgzcuZl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugxfn1OzBs-TIQjcWXJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgzHTHA9O3SBJgyUy4J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugx0GDbnzyQM4Vdlb7x4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugy7OHLiO9WMY1At9Dp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgzZhAxON4WPIXn7HDR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugw_DQynDVHjJvl2m9t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"},
 {"id":"ytc_UgywzpC9VikcdWfMXuV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
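A raw response like the one above can be turned into the per-comment coding shown earlier by parsing the JSON array and indexing it by comment id. A minimal sketch in Python, using two records copied from the response above (the `index_codings` helper and the `"unclear"` fallback for missing fields are assumptions for illustration, not part of the original pipeline):

```python
import json

# Two records copied from the raw LLM response above; the full array has ten.
raw = """[
 {"id":"ytc_UgzvBR3xx6FYttlkxYt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgywzpC9VikcdWfMXuV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]"""

# The four coding dimensions used in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_json: str) -> dict[str, dict[str, str]]:
    """Map comment id -> {dimension: value}, keeping only the expected fields.

    Unknown or missing dimensions fall back to "unclear" (an assumed default).
    """
    records = json.loads(raw_json)
    return {r["id"]: {d: r.get(d, "unclear") for d in DIMENSIONS} for r in records}

codings = index_codings(raw)
print(codings["ytc_UgywzpC9VikcdWfMXuV4AaABAg"]["policy"])  # liability
```

Indexing by id lets the display layer look up the coding for any single comment without rescanning the whole batch response.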