Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The reality is a key part of all HUMAN cognition is risk assessment. That type of risk assessment is not inherent to AI. AI has no preservation-of-humanity check on its decision making. AI is also making so many decisions so quickly that it is possible that even if it was undertaking the same level of preservation-of-the-human-species check on its decision making, it could literally, through sheer volume of decision making, still make a fatal error. By the way, the 1 in a billion risk analysis part of this podcast is something I disagree with. The risk doesn't compound by the YEAR. The risk compounds by the NUMBER of DECISIONS. If we exponentially grow AI each year the risk compounds with every decision being made. The risk compounds at essentially an uncontrolled rate. Even if each individual decision is relatively safe, the risk comes with the VOLUME of decisions. That should scare the hell out of everyone involved.
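The commenter's compounding argument can be made concrete: if each decision independently fails with probability p, the chance of at least one failure across N decisions is 1 − (1 − p)^N, which grows with decision volume rather than elapsed years. A minimal sketch with purely illustrative numbers (the 1-in-a-billion per-decision figure is an assumption borrowed from the podcast framing the comment disputes):

```python
def cumulative_risk(p: float, n: int) -> float:
    """Probability of at least one failure in n independent decisions,
    each failing with probability p."""
    return 1.0 - (1.0 - p) ** n

# Assumed per-decision failure probability: one in a billion.
p = 1e-9

# Even at that safety level, risk is driven by decision volume:
for n in (10**6, 10**9, 10**12):
    print(f"{n:>15,} decisions -> cumulative risk {cumulative_risk(p, n):.6f}")
```

At a billion decisions the cumulative risk is already about 63% (1 − e⁻¹), and at a trillion it is effectively certain, which is the volume effect the comment describes.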
youtube AI Governance 2025-12-05T17:2…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   ai_itself
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyYLqGdCiCDXwFe9XF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz7fLFhx-ZTY30iDhN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxEM-v269J50Zg8nVJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwUs9VJolAwZ9JtCyZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwhwYa1mJw-YQBqaUd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwZk7orM3w14Q2X7gh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxgEyJfGAm80Q7GWsl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzuVBF8JpP7Ae8bqKR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxSxefe9LeyYNEkVhZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyDILA_4Ia8orIT_Tt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
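A raw response in this shape can be parsed and validated before its dimensions are stored. A minimal sketch, assuming only the five field names visible in the response above (the record ID below is a hypothetical placeholder, not one of the real comment IDs):

```python
import json

# Hypothetical one-record response in the same shape as the raw output above.
raw = (
    '[{"id":"ytc_ExampleOnly","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
)

# Field names taken from the raw LLM response shown above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
for rec in records:
    # Reject any record that omits one of the coded dimensions.
    missing = EXPECTED_KEYS - rec.keys()
    if missing:
        raise ValueError(f"record {rec.get('id')} missing fields: {missing}")
print(f"validated {len(records)} record(s)")
```

Validating the batch up front makes a truncated or malformed model response fail loudly instead of silently producing a partially coded comment.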