Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- rdc_l56ug2t: ">Has anybody else had to deal with near total blackout of AI tooling? Yes. S…"
- ytc_UgzJ--lj7…: "I take a lot of issues with 6:00 - 7:25 . Factually this is just not true and I …"
- ytc_UgwPWBa2E…: "People gotta realize that AI is like a baby (or mirror): they act based on what …"
- ytc_UgyBmtALG…: "4:00 — Advice for the Next Generation: Be Curious, Use Tools. To young people nav…"
- ytc_Ugydj4ked…: "So... the Silicone think replacing human drivers with robots will... improve hum…"
- ytc_UgzNlf5VS…: "I rode a Waymo in LA with no problem, got me from Venice Beach to Inglewood in l…"
- ytc_Ugzm3XB7n…: "The real danger comes when AI is implemented on a quantum platform. Then the un…"
- rdc_jsylgd2: "The other day I asked ChatGPT to write me a corporate email out of some bullet p…"
Comment

> "Money has continually overruled safety" No shit Sherlock, it's been happening ever since money came about. Why would it change now? The global stockpile of nuclear weapons is over 25000, I wouldn't be very worried about A.I. destroying humanity quicker than us to be honest. 😅

youtube · AI Governance · 2024-02-22T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgxRqxDd7Svbz-nnCB54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxHupxsDR7QOEZ6AJJ4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_Ugzv4MUvK9VKpv7TQLd4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwMPQpSAJH1hDbSMOt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzWo7F2XvPAzZhSwRd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyFaXh9UtYMQ_BJEAt4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxoB_MLLQq4i_zGzI94AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxYaS_jClohPqStn7p4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy5Vzz0DJ8uJQFgeEd4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyynrobhL29r_eXs614AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"}
]
```
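Since the raw response is a JSON array of per-comment codings, looking up a coded comment by its ID reduces to parsing the array and indexing on the `id` field. A minimal sketch, assuming the response is available as a string (here a single-entry stand-in; `lookup_coding` is a hypothetical helper, not part of any tool shown above):

```python
import json

# Stand-in for a stored raw LLM response: a JSON array where each
# element carries the comment ID plus the four coded dimensions.
raw_response = """
[
  {"id": "ytc_UgwMPQpSAJH1hDbSMOt4AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    codings = json.loads(raw)
    by_id = {row["id"]: row for row in codings}  # index the array by ID
    return by_id.get(comment_id)

coding = lookup_coding(raw_response, "ytc_UgwMPQpSAJH1hDbSMOt4AaABAg")
print(coding["emotion"])  # indifference
```

This kind of lookup is also a quick way to cross-check a "Coding Result" card against the raw model output, e.g. confirming that the dimensions shown for a comment match the entry with the same ID in the response array.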