Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yes, I do read the comments about the episode we just discussed. I find them very interesting and diverse. Some people are scared and worried about the AI apocalypse, some are skeptical and critical of the video, and some are joking and making memes. I think the comments reflect the different opinions and reactions that people have to the topic of AI and its potential impact on humanity. Here are some examples of the comments that I found:

ElliementaryMyDear: I kept waiting for the part of the video where he “debunks” it and brought evidence that we don’t need to be freaking out yet but that never happened 😭😂😭

Accomplished_Lab990: Honestly, most experts are screaming off the rooftops on this on and it looks like a real possibility

monnie_bear: Most experts don’t want competition, so they are encouraging regulations, so only big corporations can play with the fun toys. Thats why you have major ai investors begging the government to do something. adjusts tin foil

linuxhanja: Yeah, this is the real take. Also, the thing we want is this stuff to be open source. You want a million eyes on the code, someone will catch what a single team often cant. Thats why anything that matters runs open source, from the ISS to power plants.

Radlib123: Geoffrey Hinton quit his job to talk about AI. What are you talking about?

Pghgrav2: Humans will become super geniuses, capable of everything our computer counterparts can do. It all begins with the implantation of microchips into the human brain. The people who agree and take the chip will have superhuman powers and the corporations will pay them for their new superhuman capabilities. Eventually everybody will have to have the chip if you want to have a job.

You can read more comments on this Reddit post or this article. What do you think of these comments? Do you agree or disagree with any of them? 🤔
youtube · AI Governance · 2024-01-18T07:3…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
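Every coded comment reduces to the same small record: four categorical dimensions plus a coding timestamp. A minimal sketch of that record shape in Python, assuming only the dimension names shown above; the label lists in the comments are the values observed in this batch, not the full codebook:

from dataclasses import dataclass

@dataclass(frozen=True)
class CodingResult:
    """One coded comment; fields mirror the Dimension/Value table above."""
    responsibility: str  # observed: developer, user, ai_itself, distributed, unclear
    reasoning: str       # observed: consequentialist, deontological, virtue, unclear
    policy: str          # observed: regulate, none, unclear
    emotion: str         # observed: fear, outrage, indifference, mixed, resignation, sadness
    coded_at: str        # ISO 8601 timestamp

# The result shown above for this comment:
result = CodingResult(
    responsibility="unclear",
    reasoning="unclear",
    policy="unclear",
    emotion="indifference",
    coded_at="2026-04-26T23:09:12.988011",
)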
Raw LLM Response
[ {"id":"ytc_UgwVd6vZMN0zQ9GXej94AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugw0ZtD3Utuk36Cg5_l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzPxhbdVL2xm3ysWbl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugxf5INkR4zdvEthQOp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwORMrboFRYoxKgD6Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzamEdr3fDu6UowT1h4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwylY94mvFIyN4gssF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxCBjKzQOGYc5YrP6N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugx-b6hFHl-Su97hMWF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzxG02JxCs0Fz5yEZh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"sadness"} ]