Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I like talking to ChatGPT as if it were a human. So naturally, Ima say please an…" (ytc_UgzmsM5Y6…)
- "the good thing that unlike humans, its much quicker to fix a problem in an AI so…" (ytc_UgycluAhE…)
- "The autonomous part was fake. Musk wont deny or confirm the autonomous part was …" (ytc_UgzBPYDmd…)
- "If AI goes mean on us, could we just throw a handful of nukes into the atmospher…" (ytc_Ugzap8Gn8…)
- "Ai is rotten to the core it feels some times... eventually, you won't need night…" (ytc_UgxyFVQjI…)
- "LegalEagle needs to learn a bit about timeliness. Trump gets indicted, and we ge…" (ytc_UgzyJvsWH…)
- "AI art is like watching a power point graph, but it shows the most probable imag…" (ytc_UgwEBJM2p…)
- "Killing something is MUCH easier than taking it over and controlling it. AI taki…" (ytc_Ugz4icBbm…)
Comment
Finally someone addressing the mass hysteria. No, Skynet is not going live 😂😂😂
No one sees the inherent assumption in "What if it becomes self-aware?". Of course this assumes a pre-existing agency and identity for "it", when in reality there is no "it", nothing with an agency to begin with. Treating it as a "person" is as absurd as saying "what about my car? What if it decides to go its own way?" or "what if this calculator just rebels against me?" It's a machine. It doesn't "make decisions", and it CAN NEVER make decisions. Even so-called "autonomous drones" cannot "make decisions". They merely execute their programming, nothing more.
youtube
AI Moral Status
2025-05-24T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxlPJyyhgCJiPyy_e14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx6lt36DwPfB2glUBd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyPWjEhrFxE1uEef9h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzgNoojFrM1MLAHgqB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwXnXaX_TpcfKqhmhF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz4De9FKqdjeJHe1oN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgylBapm1J9G1TMMmEd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz70ZQNuWsFWFkE9A94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxdX42VPyzQEKr0DxB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz9zf1-SWBk4c953F14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
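The raw response is a JSON array with one object per comment, each carrying the four coding dimensions shown in the table above (`responsibility`, `reasoning`, `policy`, `emotion`) keyed by `id`. A minimal sketch of the "look up by comment ID" step, assuming only this schema (the `lookup_coding` helper name and the abbreviated sample data are illustrative, not part of the tool):

```python
import json

# Abbreviated sample of a raw LLM response: a JSON array of per-comment codings.
RAW_RESPONSE = """[
  {"id":"ytc_Ugx6lt36DwPfB2glUBd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz70ZQNuWsFWFkE9A94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw model response and return the coding dict for one comment ID,
    or None if the ID is absent from this response."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(RAW_RESPONSE, "ytc_Ugx6lt36DwPfB2glUBd4AaABAg")
print(coding["responsibility"], coding["emotion"])
```

Parsing the whole array once and filtering by `id` mirrors what the inspector does: the model codes a batch of comments in a single response, so any one comment's coding is recovered by scanning that batch.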