Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment directly by its ID, or start from one of the random samples below; a minimal lookup-and-sampling sketch follows the list.
- `ytc_UgxDQY601…`: I’m so happy to hear leaders concerned. I do believe him. I have used chatgpt an…
- `rdc_g58r001`: [It looks like this would be where they would be trying to make that case](https…
- `ytc_Ugy8LbXWL…`: This is so scary! At the 14:57 mark on the video CHATGPT sounds so much like HAL…
- `ytc_UgzKNv5gK…`: We are currently in Part one of ‘the Animatrix the second Renaissance.’ We all k…
- `ytc_UgxSMuzLn…`: It's certainly a contentious topic, personally it pisses me off to no end mostly…
- `ytc_UgywCzJ0r…`: The billionaires are barreling headlong into AGI. It’s like they don’t care that…
- `ytr_UgzAxdsUR…`: Thats the problem with AI. It programs itself with just the perimeter given by d…
- `ytc_UgyrZzQ1Z…`: brilliant students for the most part will continue to take their work seriously …
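A minimal sketch of the lookup and sampling behind this list, assuming the coded comments are exported as a JSON array of records with at least an `id` field. The file name `coded_comments.json` and the record shape are assumptions for illustration, not part of the tool shown here.

```python
import json
import random

# Load the coded comments. The file name and record shape
# (an "id" field per record) are assumptions for this sketch.
with open("coded_comments.json", encoding="utf-8") as f:
    comments = json.load(f)

# Index by comment ID for constant-time lookup.
by_id = {c["id"]: c for c in comments}

def lookup(comment_id: str) -> dict:
    """Return the full record for one coded comment (KeyError if absent)."""
    return by_id[comment_id]

def random_samples(n: int = 8) -> list[dict]:
    """Draw n distinct comments, as in the sample list above."""
    return random.sample(comments, k=min(n, len(comments)))
```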
Comment
He lost me when he said we are in a simulation! I agree with the risks of AI, but I don’t think it will ever outsmart humans. We are really underestimating the power of our own brains. An AI winning a game of Chess or Go doesn't mean it’s smarter than us. After all, we built them! They just calculate a huge amount of possible moves to find the optimal path. That isn't actual intelligence; it’s just a search capability. Real intelligence would be an AI learning from the same limited amount of information a human has, and still outsmarting experts. If that happens, then we can panic.
Source: youtube · AI Governance · 2025-11-19T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
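The table above is one entry from the batch response shown below. A sketch, assuming the raw response parses as a JSON array of objects keyed by `id`; `coding_for` is a hypothetical helper, not part of the tool.

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Pull one comment's coding out of a raw batch response.

    Assumes the model replied with a JSON array of objects keyed
    by "id", as in the response shown below.
    """
    batch = json.loads(raw_response)
    for entry in batch:
        if entry.get("id") == comment_id:
            return {dim: entry[dim] for dim in DIMENSIONS}
    raise KeyError(f"{comment_id} not in batch")
```

For the comment shown here, the matching entry appears to be `ytc_Ugwx49GKJjItl65s_VB4AaABAg`: its values line up with the table above.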
Raw LLM Response
```json
[
{"id":"ytc_UgwIShsqcD7dcQBvGgl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugww1DAAc2BvSJZqlzt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzaOXESeN0B7NVZYeh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgygW72jSt5Ymn0JP-d4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgwVJwpA0bx50wm4AYd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugwx49GKJjItl65s_VB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxYD3lM6h5wRimt9n94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx2RRQqu3OjLDABX-F4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxXaVKqF9qdVg-nzHV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzOnUCN6AL8KXNpQax4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
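Raw model output like this is untrusted: it can be malformed JSON, drop comment IDs, or use labels outside the codebook. A minimal validation sketch; the allowed value sets below are inferred from the values visible on this page and may undercount the real codebook's categories.

```python
import json

# Allowed values per dimension, inferred from the values visible on
# this page; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"developer", "user", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str, expected_ids: set[str]) -> list[str]:
    """Return a list of problems found in one raw batch response."""
    try:
        batch = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    if not isinstance(batch, list):
        return ["top-level value is not a JSON array"]
    problems, seen = [], set()
    for entry in batch:
        if not isinstance(entry, dict):
            problems.append(f"non-object entry: {entry!r}")
            continue
        cid = entry.get("id", "<missing id>")
        seen.add(cid)
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                problems.append(f"{cid}: unexpected {dim} value {entry.get(dim)!r}")
    for missing in sorted(expected_ids - seen):
        problems.append(f"{missing}: no coding returned")
    return problems
```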