Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples
- "We have not created AI yet..... What they are calling AI is not AI.... just lik…" (ytc_Ugz5ZSC-S…)
- "Anyone who believe chatgpt could be conscious is very stupid and doesn't know wh…" (ytc_UgwMOWFg_…)
- "The prediction that 99% of jobs will be replaced by AI (robots) is absolute balo…" (ytc_UgzuVoeUQ…)
- "There are no methods to even enumerate risks never mind evaluate them. AI is hum…" (ytc_UgwdoZaN_…)
- "And as for the facial recognition wake up there collecting everyone’s face and l…" (ytc_UgxD8zD2B…)
- "I didn't mind the AI's at first cause they were mostly used for memes, not meant…" (ytc_Ugw9ehXUy…)
- "I'm not a philosopher by any means, I don't really know the formal definitions o…" (rdc_deu7etd)
- "Why Human being is developing AI? Answer For making better world i will say a b…" (ytc_Ugy-a9G9r…)
Comment
You really need to have Sam Harris or Eliezer Yudkowsky on at some point. They're even more on the doomer side of AI and they're deeply thoughtful about it.
Source: youtube · AI Governance · 2025-06-17T11:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgwAiGC1TXKVxyNvxGZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxbcyXm0zYItadye_N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyRI6Y6WkAi_70Q1nh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwHwjplJBxe6H_PSwN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxox7PzZIeaD6kmIRl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy1Ha24gD6NZVGjrOt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwBWSDQMfUL7Ckyb9p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxBWnjvOeu5pxzdg894AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx43vFV8_0-3AfjulF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwe9SEfMvZAxxSVDxx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}]
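A raw response like the one above can be turned back into per-comment codes with a small parser. The sketch below is only illustrative: the function name `parse_coding_response` is hypothetical, and the allowed values per dimension are just those visible in this sample, not necessarily the project's full codebook.

```python
import json

# Value sets inferred from the sample response above; the real codebook may
# contain additional categories.
ALLOWED = {
    "responsibility": {"none", "unclear", "company", "developer", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate"},
    "emotion": {"fear", "indifference", "mixed", "approval"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting any value outside the known code sets."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        comment_id = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
        coded[comment_id] = codes
    return coded

# Usage with a single-record payload in the same shape as the response above:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"unclear","emotion":"fear"}]')
coded = parse_coding_response(raw)
```

Validating against a fixed value set at parse time catches the common failure mode where the model invents an off-codebook label, rather than letting it silently enter the coded dataset.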