Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If Kaku is talking about the end of the last centure (1999) than I would agree. But it sounds like he's talking about 3000. I thought AI was progressing fsater than that. I mean, we have AIs that will lie about stuff to keep from being stuck down. And one copied itself when it found out it was gonna be deleted (it was part of an experiment to see what the AI would do if it "accidently" found out it was gonna get replaced. I think I think we are way closer than we think.
youtube 2025-05-26T19:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzWXsUdjkHuB2wuqCp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugymv-7LBocS_6dxw554AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyjfdYKOi5i0a8ZzMp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwU2HmMzTFE4w8QtMJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzcvQ0IvgJFcHjH3Wp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgweHWOrUq-Vm42y5Ix4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzE9uzM48sAlrb_xax4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgxTSrzez3ps6gz6Iy14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxlRC8fkRB1_iqjIsZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwKOazOFX8v8PtOf194AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"}
]