Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The only way that AI could reach the extent that Ben talked about is if it somehow found a way to disconnect itself from its main server, and upload itself (through some sort of means we couldn't possibly comprehend) onto the internet. Every "AI" that has been created always had some sort of physical center that could be interacted with. It's entirely possible for AI to reach the "World War III" phase, but we're talking about it being so far into the future, it would more likely be World War IIX. It's an interesting rabbit hole, but the reality is that we're nowhere close to that point. So far, the only "AI" we've made hasn't been actual artificial intelligence. Everything that has been built has either been pre-programmed to do certain tasks based on if/then scenarios, or pre-programmed to pull different bits of information off the internet (up to a certain cutoff date) to fulfill a request.
youtube AI Governance 2024-11-11T12:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxPicgdQkGU7XxpkB94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzMoZ5YQlyq1nGYm-14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx0Jz9ArbU0XyZiqZZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwh6YfOSVOsDoXw5l94AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgySrwFK1dRWxKgSQZx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzob57XJ8e7z-IlUzl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw12tN_1FySu4NVdVV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwr04e4e6CzZppQ5xt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzGUYoW9oAXlN69QVx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugydv2wzbCsUiummGgx4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"}
]
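The raw response above is a JSON array of per-comment coding records, each keyed by a comment id. A minimal Python sketch of how such a batch response could be parsed and looked up by id (the function name `index_by_id` is illustrative, not part of the pipeline; the payload is truncated to two records from the response above for brevity):

```python
import json

# Two records copied from the raw LLM response above (truncated for brevity)
raw = '''[
  {"id": "ytc_UgxPicgdQkGU7XxpkB94AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw12tN_1FySu4NVdVV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

def index_by_id(payload: str) -> dict:
    """Parse a batch coding response and key each record by its comment id."""
    return {rec["id"]: rec for rec in json.loads(payload)}

codings = index_by_id(raw)
print(codings["ytc_Ugw12tN_1FySu4NVdVV4AaABAg"]["responsibility"])  # ai_itself
```

Indexing by id lets the display layer join each coding record back to its original comment without relying on array order, which the model is not guaranteed to preserve.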