Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `rdc_dmpekgg`: "As someone who works in insurance from a data perspective, yes self driving cars…"
- `rdc_mzw652j`: "Because his definition of “sentient” includes the way transformer models work wh…"
- `ytc_UgzIbw6c9…`: "How about drive the car? How have we become so dependent on AI to handle the r…"
- `ytc_UgzMl7heB…`: "I fear not the AI that passes our best attempt at a Turing test. I fear the AI t…"
- `ytc_UgwMFHh0s…`: "The thing that bothers me most about the Silicon Valley tech bros, is that you j…"
- `ytc_Ugy6QwZWp…`: "If a person or group of people had ingrained bias in them, AI will merely reinfo…"
- `ytc_Ugx1cEQoc…`: "Next time ask him for some examples of the "new" jobs that are going to magicall…"
- `rdc_dv63nxo`: "Do any of these horns or skins cure people of being ignorant, greedy fucks that …"
Comment
He doesn't really explain an exact scenario where AI could end civilization. To me, aside from it gaining access to nuclear weapons and then using them against humans, the biggest threat AI poses is eliminating huge swaths of jobs and causing mass unemployment the likes of which we've never seen before.
youtube · AI Governance · 2023-04-18T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugyj_tTfSgGyMtxlAdV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwoLRzw2ap5zrPvH4V4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgysBwa0gi6BzIGsy9l4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwCnM30GWAbZlHYCvV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-WG9DbFZ7aHz8c5d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyvXjWs8F7O8leGY5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxosHWn_DsrBDIymjR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx71RC5C4RskOf4cE54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw3QgyqjFvVSTifrkN4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzvfHR0Rsy-Eu_4DRV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
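A response like the one above can be checked programmatically before the codes are stored. The sketch below is a minimal validator, assuming each row must carry an `id` plus the four dimensions from the Coding Result table; the allowed category values are inferred only from the codes visible on this page, so the real codebook may define more.

```python
import json

# Allowed values per dimension, inferred from the codes seen on this page
# (assumption: the full codebook may include additional categories).
SCHEMA = {
    "responsibility": {"none", "company", "user", "government", "ai_itself", "distributed"},
    "reasoning": {"unclear", "virtue", "deontological", "consequentialist"},
    "policy": {"none", "regulate", "ban", "unclear"},
    "emotion": {"mixed", "outrage", "approval", "indifference", "fear", "resignation"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded row against SCHEMA."""
    rows = json.loads(raw)
    for row in rows:
        if "id" not in row:
            raise ValueError(f"missing id in row: {row}")
        for dim, allowed in SCHEMA.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row['id']}: bad {dim!r} value {value!r}")
    return rows

# Hypothetical one-row response for illustration:
raw = '[{"id":"ytc_x","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}]'
print(len(validate(raw)))  # 1
```

Rejecting a whole batch on the first bad value keeps the stored codes clean; a production pipeline might instead collect all violations and re-prompt the model for just the failing IDs.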