Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A lot of people like to speculate about what bad things future AI systems might do, but most of it is quite ad hoc reasoning. There is actually a whole field of AI safety research which studies this rigorously; the channel https://www.youtube.com/@RobertMilesAI is a good introduction. What you can learn from that channel:
- How AI systems develop goals that weren't explicitly in the training setup
- How AI systems benefit from being deceptive
- How hard alignment with human goals actually is
- How AI systems do not have to be vastly more intelligent than humans along many axes to be dangerous
- How the actions of advanced AGI (artificial general intelligence) will be hard to understand and impossible to predict
youtube AI Governance 2025-08-26T17:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_UgzHNH3jgnuUeiqntzt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyj7cE_vmx_PkA1LDp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz_BaFM712eIP6j1qR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz4MGHq1c8H81JXzkF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugykvmnr8uk-u8yFMqV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
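A minimal sketch of how a raw response like the one above could be parsed into per-comment coding records. This assumes the model reliably returns a JSON array of objects keyed by comment `id`; the variable names here are illustrative, not part of the tool shown.

```python
import json

# Raw LLM response in the format shown above: a JSON array where each
# object carries one comment's codes across the four dimensions.
raw = '''[
  {"id":"ytc_UgzHNH3jgnuUeiqntzt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugykvmnr8uk-u8yFMqV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]'''

# Index the codes by comment id so a single comment's coding result
# can be looked up directly, as in the "Coding Result" table above.
codes = {entry["id"]: entry for entry in json.loads(raw)}

record = codes["ytc_Ugykvmnr8uk-u8yFMqV4AaABAg"]
print(record["reasoning"], record["emotion"])  # → mixed approval
```

In practice the parse step would also want to guard against malformed model output (e.g. wrap `json.loads` in a try/except and reject entries missing any of the four dimension keys) before writing the codes back to the dataset.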