Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is like the 4th lady I've seen downplay the existential risk of AI. Like why? What are you accomplishing? How is that a "distraction"??? You don't think agents that are able to act independently as something worth taking the safety seriously of? People think the AI images and generated information is big, but that's just a small piece of a much bigger project. You put the language model on a robot it can think, you give it AI video it can imagine and simulate the world. Now you have infinite synthetic data. People underestimate because they are looking at things separately and not how they are all being put together. Add to the fact that these things are going to be everywhere and self improving and you are going to be more and more concerned as things compound which is why the people on creating these things feel existential terror as the technology improves, but we have yet to find answers for the millions of questions of just how to survive the near term let alone the long term.
youtube AI Responsibility 2024-02-20T07:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyhX1bLYVaXaWys16B4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwJ6XXnt3BknYD75194AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyd0VOOFhIgKWV_qDN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyzuHjd9BKtUxlQSLt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzW5jSwYFEbumylX3V4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugx2gp957etl9p3Ck1N4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw7HpySi8YMZLCCjNx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxMS6s7X58GmHFoiXl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyx-4wX03RPyG2pmFN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxzYJCJ70faZQaS5nF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
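The coding result shown above is a single row of this batch response, matched by comment id. A minimal sketch of that lookup, assuming the raw response parses as a JSON array of row objects (the variable names here are illustrative, not the tool's own code; the string below is an excerpt of two of the ten rows):

```python
import json

# Excerpt of the raw LLM response above (2 of 10 coded rows).
raw = '''[
  {"id": "ytc_UgyhX1bLYVaXaWys16B4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyd0VOOFhIgKWV_qDN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]'''

# Index the batch by comment id so any coded comment can be looked up.
codes = {row["id"]: row for row in json.loads(raw)}

# The comment displayed above maps to the ai_itself / fear row.
entry = codes["ytc_Ugyd0VOOFhIgKWV_qDN4AaABAg"]
print(entry["responsibility"], entry["emotion"])  # ai_itself fear
```

If a model response fails to parse or omits an id, that comment simply has no coding result to display, which is why inspecting the exact raw output is useful.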