Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or pick one of the random samples below to inspect.

Random samples
- "First, I sincerely respect Andrew's contribution to ML field. Second, why would…" (ytc_UgxJrKvSS…)
- "This video left me with the feeling that Hank needs to go watch his own video on…" (ytc_Ugx5USNcA…)
- "AI is an oxymoron. you cannot have something artificial with any intelligence, …" (ytc_Ugwe32ILL…)
- "Wow such an awesome privilege to have your children go there!!! ❤Wish these sch…" (ytc_Ugym4EvzC…)
- "Some people can be physically or mentally disbabled to the point that AI art is …" (ytc_UgyxOYwE7…)
- "No, he is right. There is absolutely no reason to think that a LLM is suited to …" (rdc_kp2czs7)
- "Both these hosts need to have a serious chat with their preferred AI. There are …" (ytc_UgzoXADAY…)
- "ChatGPT has more personality than a surprising amount of people. Many people hav…" (ytr_UgyQehIRZ…)
Comment

> Once you implant an AI with the concept that a certain number of human lives (X) are expendable to achieve a military goal (Y), you're screwed.

youtube | AI Governance | 2023-07-08T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyAy3XGCJv98cwIcR14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzvFkk24Kd8ucyX4kN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwBFg1YW2OcWHoQCg54AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwYYdxVVmrxAJq0Rr54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyB5LYxCJZ87dAFJVR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwRCTIXQQFGvI1-PQF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxZ7pJvprrdkB4F1Od4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwzg9RR5D5BGWAaJmZ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgyXNK7xuCCMQRUtcrl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyCY2Z3vClcOIbSkfF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
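The raw response above is a JSON array with one object per coded comment, carrying the four dimensions shown in the Coding Result table (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch response could be parsed and validated is shown below; the allowed category sets are an assumption inferred only from the values visible in this sample, and the full codebook may define more.

```python
import json

# Allowed values per dimension -- ASSUMED from the sample response above,
# not taken from the project's actual codebook.
ALLOWED = {
    "responsibility": {"developer", "user", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response into {comment_id: codes},
    rejecting any value outside the allowed category sets."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = codes
    return coded

# Hypothetical one-record batch for illustration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_batch(raw)["ytc_example"]["policy"])  # → regulate
```

Validating against a fixed category set at parse time catches the common failure mode of LLM coders drifting to labels outside the codebook, before bad values reach the analysis.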