Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:
- "At this rate humans will be extinct! People need to stop talking about change a…" (rdc_fwgljv1)
- "AI isn't gonna take jobs away and make *us* into an utopia, AI is gonna replace …" (ytc_UgwLMBgES…)
- "This is obviously AI. Robots arent this advanced yet to many talk shows have con…" (ytc_UgweLklSX…)
- "This is the perspective of an artist. I like AI art because I can find things i…" (ytc_Ugz72IReH…)
- "predictive policing sounds like a thing that can be heavily abused at a later da…" (ytc_UgzWpu9l9…)
- "Read up before lecturing, please. This whole "AI" doesn't work the way you think…" (ytc_UgwdbIWh1…)
- "make AI safe? I agree, however, just like if the US slowed down AI and China wo…" (ytc_Ugw_fFn0g…)
- "I don't believe I think there are people who, or possibly even one tech person r…" (ytc_Ugxzvlw1f…)
Comment
Could be a period of uncertainty that will accompany the development of artificial intelligence by humans until this industry and its local and international laws mature.
Imagine if a miscalculation or technical error due to a lack of information during the Manhattan Project led to the destroying San Francisco instead of Hiroshima, for example!
Interesting for Yuval Noah's thought process! It's entertaining and thought-provoking! Thanks to him and to the program host.
Source: youtube · AI Governance · 2025-07-19T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxqLUsQczDeyH1TcP14AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxBFhl7yyKq0X5exBd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyws5GHVczVykPELb54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwCPaFHEqS6n2rhCMp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxW5hAHr5rosUJGUb14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw82cnI-LJ88JqmfBl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxGEKpjaTwTnYTXM9t4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgycWplKSg8F2Qj_dZ14AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxTXUBUt25uTX9XUnV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz_Kb3haJ5yo6okG5d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
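Since the raw response is a JSON array keyed by comment ID, looking up the coding for a given comment is a one-step parse-and-index. Below is a minimal sketch of that lookup; `index_codings` is a hypothetical helper (not part of any tool shown here), and the two records are copied from the response above.

```python
import json

# Raw model output: a JSON array of per-comment codings, as shown above.
# Only two records are reproduced here for brevity.
raw_response = """
[
  {"id": "ytc_Ugyws5GHVczVykPELb54AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugz_Kb3haJ5yo6okG5d4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_codings(response_text: str) -> dict:
    """Parse the model's JSON array and index each coding record by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)
# Look up by comment ID, as in the inspector above.
print(codings["ytc_Ugyws5GHVczVykPELb54AaABAg"]["emotion"])  # fear
```

In practice the model output may contain surrounding prose or markdown fences, so a robust version would first extract the bracketed JSON span before calling `json.loads`.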