Raw LLM Responses
Inspect the exact model output behind any coded comment. Look up a comment by its ID, or open one of the random samples below.
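Under the hood, a lookup like this can be a simple scan over the stored batch output. A minimal sketch, assuming the coded records live in a JSON array of objects with an `id` field (the shape shown under "Raw LLM Response" below); the `coded_comments.json` path is hypothetical:

```python
import json

CODED_PATH = "coded_comments.json"  # hypothetical path; the real store isn't shown


def lookup_raw_response(comment_id: str) -> dict | None:
    """Return the coded record for one comment, or None if it was never coded."""
    with open(CODED_PATH, encoding="utf-8") as f:
        records = json.load(f)  # a JSON array like the one under "Raw LLM Response"
    return next((r for r in records if r["id"] == comment_id), None)


print(lookup_raw_response("ytc_UgxlRue2H7T6_ZB_vUJ4AaABAg"))
```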
Random samples:

- "What q lead of crap. They probably completely chatted out the core algorithm an…" (`ytc_UgydXXyzn…`)
- "demon is there, AI is gonna do his work his voluntary. Look up the AI robots, th…" (`ytr_Ugy4sfm3d…`)
- "Good thing I’ve never asked my ai chatbot about anything illegal- which it’s not…" (`ytc_Ugw2NkSYO…`)
- "Many people confuse gen z with these though, many (including in this very thread…" (`rdc_ohqeqmb`)
- "Definitely this. The main difference between developing and developed nations is…" (`rdc_fwhs7og`)
- "ChatGPT isn't really trained for that task, it's trained on a huge amount of tex…" (`rdc_jipgf5d`)
- "Yeah, AI is cool to a degree. This video proves that shits already not safe to b…" (`ytc_UgzZos84c…`)
- "We appreciate your observation! In this video, Sophia was engaging in a conversa…" (`ytr_UgwS1eWV_…`)
Comment

> I think the biggest risk with current AI is just people thinking it’s smarter than it is and giving it decision making power over something really dangerous. The best models still make a lot of mistakes and because they’re basically just guessing and don’t actually understand anything, they sometimes just guess wrong on things you wouldn’t expect they could mess up. But it’s all just probability. Like, you can ask pretty complicated physics problems and get ok answers. But every once in a while it’ll tell you that gravity makes things accelerate away from the earth rather than towards it because it doesn’t actually know how gravity works as a concept. It’s just playing chance.

youtube · AI Governance · 2025-10-15T21:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
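Each coded record fills the four dimensions in the table above. As a schema sketch, assuming the value sets visible on this page (the full codebook is not shown, so these enumerations are illustrative rather than exhaustive):

```python
from dataclasses import dataclass

# Illustrative value sets, inferred from the records on this page;
# the real codebook may define more labels.
RESPONSIBILITY = {"user", "company", "ai_itself", "none"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"liability", "regulate", "industry_self", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "resignation", "indifference"}


@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        """True if every dimension holds a value from the (assumed) codebook."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```

Running `CodedComment(**record).validate()` over each parsed record is a cheap way to catch the model inventing a label outside the codebook.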
Raw LLM Response
```json
[
  {"id": "ytc_Ugx0eO84iCVdGa-cKip4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz8PlCBzNjvAigLxFh4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxlRue2H7T6_ZB_vUJ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugyt3hv5O8ERb9YLSoB4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyfgxGpRqKXk1E697R4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxV6pE8mgjX3NxCgAN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzOAM377rC3BN7EAil4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxnVyar3ZKhY8tQS2B4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxXCp0x5W-aQeQ8lBp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzdO69m5g0_OjZkzkd4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"}
]
```
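Because the raw response is a plain JSON array with one object per comment in the batch, indexing it by comment ID is straightforward. A minimal sketch (field names are taken from the response above; `raw_response.json` is a hypothetical dump of that array):

```python
import json


def index_batch(raw_text: str) -> dict[str, dict]:
    """Parse one raw batch response and index its records by comment id."""
    records = json.loads(raw_text)  # raises ValueError if the model emitted invalid JSON
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    return {rec["id"]: rec for rec in records}


# "raw_response.json" is a hypothetical file holding the array shown above.
with open("raw_response.json", encoding="utf-8") as f:
    batch = index_batch(f.read())
print(batch["ytc_UgxlRue2H7T6_ZB_vUJ4AaABAg"]["emotion"])  # -> "fear"
```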