Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I'm noticing something similar, but it's likely because we're both operating in …" (rdc_oae856v)
- "So he’s already done 1 thing out of the 2 things that will fuck the world up! He…" (ytc_UgwzE9dyr…)
- "The radio is going to make you dumb! The television is going to make you dumb! T…" (ytc_UgxX8G8HG…)
- "Good talk, really apricate Dr. Suleyman's perspective on AI. Maybe I'm bias a…" (ytc_UgzCmuxHB…)
- "A lot of productivity gain will come out of eliminating middle management and th…" (ytc_UgxGyhNKU…)
- "I’m documenting my journey with AI tools too! Rumora’s been great for its unique…" (ytc_Ugy6kuRfF…)
- "Just another reason why I'm glad that i don't use ChatGPT or any of the others .…" (ytc_UgzNDO3iL…)
- "No. He has been openly speaking out. But sadly these big Tec are more intereste…" (ytc_UgxCKDIyl…)
Comment
The professor is warning about AI having a life of its own and taking over the world. Why can't we simply pull the plug on it? After all, it's still in essence a computer program, nothing like a real being with evil intentions. I am more concerned about the internal enemy it could become. How insidiously it could attack individual sense of identity, replace many of social structures, and eat away at human relationships. This is the more dangerous enemy.
Platform: youtube · Topic: AI Responsibility · Posted: 2025-09-07T14:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwgiF7p-zqVUgRwM3h4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgztEidSicyros_nuBZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz3cLJbzP2yjFpIlR54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyWFj9NRKeOubqIUv14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx5-F5lWCSukxhw9-p4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzONhkHJJz930DhShd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz0ph4OAcHonlNpKbt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxB-lO7_BAq_PzwjpR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwndzHR8szx82HO7IB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwn4MF5y2Ke0TSdmjF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
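The lookup-by-ID workflow above can be sketched in a few lines: parse the raw LLM response as JSON and scan for the record matching one comment ID. This is a minimal illustration, not the tool's actual implementation; the `lookup_coding` helper name is hypothetical, and the two sample records are copied from the response shown here.

```python
import json

# Two records copied from the raw LLM response above (truncated for brevity).
raw_response = """
[
  {"id": "ytc_Ugx5-F5lWCSukxhw9-p4AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzONhkHJJz930DhShd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

def lookup_coding(raw, comment_id):
    """Return the coded dimensions for one comment ID, or None if absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_Ugx5-F5lWCSukxhw9-p4AaABAg")
print(coding["policy"])  # liability
```

The fields of the returned record are exactly the four dimensions rendered in the Coding Result table (responsibility, reasoning, policy, emotion).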