Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI's analysis of his claims: Short fact check of the video. Most of the claims in the video mix real AI concerns with heavy exaggeration. Here is the honest breakdown.

1. AGI by 2027: Some prediction markets say this, but experts disagree widely. This is speculative, not settled.
2. Ninety-nine percent unemployment in five years: No economic model supports this. AI will disrupt jobs, but not wipe out ninety-nine percent in that timeframe.
3. Robots replacing all physical jobs by 2030: Robotics is improving but nowhere near that level. This timeline is unrealistic.
4. "We don't know how to make AI safe": Partly true that safety lags behind capability, but progress in safety research is real. Not zero.
5. "You cannot turn off AI and it will preemptively kill you": Turning off a widely spread system can be hard, but the rest is speculation, not proven fact.
6. AI causing extinction through engineered pandemics: AI could increase bio risk, which experts take seriously, but extinction statements are extreme and not based on data.
7. "We are almost certainly in a simulation": This is a philosophical opinion, not a scientific fact.
8. Bitcoin as the only scarce resource: Bitcoin is scarce by design, but many real physical resources are also scarce. This claim is exaggerated.
9. Claims about Sam Altman's motives: These are personal interpretations, not verifiable facts.
10. Singularity, immortality, and 2045 predictions: All speculative. No evidence supports these exact timelines.

---

Final sentence for you to post: The video comes from a serious AI researcher, but it presents the most extreme interpretations and timelines. Some concerns are valid, but many claims are exaggerated, speculative, or not supported by current evidence.
youtube AI Governance 2025-11-21T11:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzNnxQvUOlPKlSOLVJ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwzDmdHE3ODa_jGpkF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgznxJ7pznEOdQ8T7-J4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyXrWhTkFFlExi0Rwx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugylnx4UIWhl-fdb8bJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwq8ViqYLn_5C8cRut4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzNC9ctzPZm3_i_M8B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxMNZg5u0sh1E8mnVR4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxzXnbsGiTbDLDfPfh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgyF9nT2fB47yiPsSPB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
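The raw response above is a JSON array of records, each carrying an `id` and the four coding dimensions shown in the result table. A minimal sketch of how such output could be parsed and validated before use, assuming the allowed category values are exactly those observed in this sample (the real codebook may include additional categories, and `validate_codings` is a hypothetical helper, not part of any pipeline shown here):

```python
import json

# Category sets inferred from the sample output above (assumption:
# the actual codebook may define more values than appear here).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "ban", "regulate", "industry_self"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept if its id looks like a YouTube comment id
    ("ytc_" prefix) and every dimension holds an allowed value.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id", "").startswith("ytc_"):
            continue
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

sample = (
    '[{"id":"ytc_x","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
)
print(len(validate_codings(sample)))  # 1
```

Dropping malformed records rather than raising makes the check tolerant of occasional hallucinated labels in model output; a stricter pipeline might instead log or re-prompt on invalid records.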