Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
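If the raw outputs are also archived outside the UI, the lookup by comment ID can be scripted. The sketch below assumes a JSON Lines archive (`raw_responses.jsonl`, with `id` and `raw` fields); the file name and field names are illustrative, not part of the tool:

```python
import json

def lookup_raw_response(comment_id: str, path: str = "raw_responses.jsonl") -> dict | None:
    """Return the stored raw LLM output for one coded comment.

    Assumes each line of the (hypothetical) archive is a JSON object with an
    "id" field holding the comment ID and a "raw" field holding the verbatim
    model output.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: inspect the model output behind one coded comment.
hit = lookup_raw_response("ytc_UgyAx2Qpr6NczK02Snl4AaABAg")
if hit is not None:
    print(hit["raw"])
```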
Random samples — click to inspect
How fast they are expanding??? They plan to go from 1500 cars to 3000 cars in 2…
ytc_UgzpuhQNm…
AI is awesome 👌🏼
Copy rights can't be called if original image has also used re…
ytc_UgwALUXpS…
@asas-xn4qkThat's a LOT of cope in one comment. AI bros can dish criticism, but…
ytr_UgwSvfQZi…
If only we had ChatGpt when I was in school in the '80'ies and '90'ies....…
ytc_UgyAtHuZs…
God why tf did I ever listen to my dad who was embarrassed to work labor jobs. I…
ytc_Ugzwa4FG_…
People who develop software should not be placed in positions to preach about th…
ytc_UgxLk3VDU…
this is so clean, what’s your process? I do a lot with midjourney v6 and share t…
ytc_UgxuMZrwX…
Was exactly my thought, who will buy any products, if all workers will be replac…
ytc_Ugyif1NOa…
Comment
You don't ask the pertinent question. What is the way in which A.I. will physically kill the humans? Does one rogue A.I. convince a very special human with the power to do something, then go out to follow specific orders from the A.I.? Or does a company start building robots and downloading A.I. programs into their "brains" who then coordinate destroying the humans? I need to know the physical parameters of how this "kill all humans" thing happens.
youtube · AI Governance · 2026-02-25T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxuSEuDMEC0tjiJE9V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyTCxNwGq_lgqDh3Kx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyAx2Qpr6NczK02Snl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxNxLLZ3dY_a9Gt6Dp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzinWPBk9jl7p9eQvZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxX6Vu6aMBrr_4GDv14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzdCq2DGG1FE1ibytx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwWW2Il3p8Faim5A6t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzsHTOyhAvQG85uZP54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx849mvh7WM2eMtkQ54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]
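The raw response is a plain JSON array with one object per comment in the batch, each carrying the four coding dimensions as fields. As a minimal sketch (function and variable names here are illustrative, not part of the tool), the record backing the Coding Result above could be extracted like this:

```python
import json

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def extract_coding(raw_response: str, comment_id: str) -> dict | None:
    """Pull the coding record for one comment out of a batched model response."""
    records = json.loads(raw_response)
    for record in records:
        if record.get("id") == comment_id:
            # Keep only the four coding dimensions, in the order the table shows them.
            return {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
    return None

# One record from the response above, standing in for the full array.
raw = '[{"id":"ytc_UgyAx2Qpr6NczK02Snl4AaABAg","responsibility":"company",' \
      '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]'
print(extract_coding(raw, "ytc_UgyAx2Qpr6NczK02Snl4AaABAg"))
# {'responsibility': 'company', 'reasoning': 'consequentialist', 'policy': 'unclear', 'emotion': 'fear'}
```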