Raw LLM Responses
Inspect the exact model output for any coded comment; each result can be looked up by its comment ID.
Random samples

- "People don't realize how critical the "AI isn't profitable" point it. Capitalism…" (ytc_UgyjQDFDn…)
- "Pictures can be art. But art doesn't mean pictures. AI is not pictures, it's bar…" (ytr_UgyMmJ_S3…)
- "I’m not sure if anyone or anything checks this after 5 months, but you asked for…" (ytc_UgyviY5Wk…)
- "Not even close. This hearing shows the level of ignorance most people, well repr…" (ytr_UgwQmeN99…)
- "I think Ai is a toy. An unrealistic pipe dream. It is useful, but just as useful…" (ytc_Ugw_RPXyA…)
- "He for sure had some type of personalisation on chatgpt or somehting i know how …" (ytc_UgxILNVRg…)
- "I'd love to see AI handle a 200,000 ft² building with six GEA compressors, 50 va…" (ytc_UgwGT_1Mu…)
- "How many episodes does he have to do about AI before he concludes “yes it is bad…" (ytc_Ugzu0RywN…)
Comment
We were already dumb enough to invent weapons that could destroy the planet. We also used them and proved the devastation that they can cause. We also used them without fully knowing the residual impact to the planet. We still arm ourselves with them to threaten each other. We are reckless. We are not ready for the AI test!
We will achieve AI Super intelligence and risk destroying ourselves. The safe guards will come too late. As long as humans are obsessed with greed, power, control, fighting over which god is the right one, who owns which land, where the borders are, saluting flags and being divisive, we will fail this test. This technology could ease suffering around the world... if only that was the true motivation behind all of the people developing it. But it is not and that is why we're doomed. The only hope is that the sweet spot lasts long enough to enjoy it before the KABOOM!
youtube
2025-09-08T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
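One way to sanity-check a coded record before displaying it is to validate each dimension against its category set. The sets in this sketch are inferred from the values visible on this page, not taken from the project's codebook, so treat them as assumptions:

```python
# Category sets inferred from values seen in the raw responses on this page;
# the actual codebook may define more (or differently named) categories.
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose coded value is missing or unrecognised."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coded result shown in the table above:
coded = {"responsibility": "user", "reasoning": "consequentialist",
         "policy": "regulate", "emotion": "fear"}
print(validate(coded))  # [] -> all four dimensions are valid
```

A record with an out-of-set or absent value would come back with that dimension name listed, which is a cheap guard against malformed model output reaching the display layer.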
Raw LLM Response
```json
[
  {"id":"ytc_UgzkLsmjWwdt-ETb_3R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy1YwF0jYCaWLblX0t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwB7hndB-JIWWKrijV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwOlW9OPo9wuNepGhh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw9X9Yjyo7CerG8PSh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyAWJDk_dqUbwaFNF94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"unclear"},
  {"id":"ytc_UgwIbvKQ4ei1MbGkrM94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx5MXZe1fyO4DXL-DF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyGxK_wJ9mgpGzXUQN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw8LIiqeYF8PPtb-n14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
```
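The raw response above is a JSON array with one object per comment, keyed by `id`. A minimal sketch of the parse-and-lookup step, assuming the response text is available as a string (the `raw_response` literal below embeds two of the records shown above; a real run would use the model's full output):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw_response = """
[
  {"id":"ytc_Ugx5MXZe1fyO4DXL-DF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyGxK_wJ9mgpGzXUQN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
"""

# Index the array by comment ID so any coded comment can be looked up directly.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

record = codes_by_id["ytc_Ugx5MXZe1fyO4DXL-DF4AaABAg"]
print(record["responsibility"], record["policy"])  # user regulate
```

The looked-up record matches the Coding Result table for the selected comment: responsibility `user`, reasoning `consequentialist`, policy `regulate`, emotion `fear`.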